phases: move the binary decoding function in the phases module
Boris Feld - r34321:12c42bcd default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architected as follows

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

The blob contains a space separated list of parameters. Parameters with a
value are stored in the form `<name>=<value>`. Both name and value are
urlquoted.

Empty names are forbidden.

Names MUST start with a letter. If this first letter is lower case, the
parameter is advisory and can be safely ignored. However, when the first
letter is capital, the parameter is mandatory and the bundling process MUST
stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allows easy human inspection of a bundle2 header in case of
  trouble.

Any applicative level options MUST go into a bundle2 part instead.
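
For illustration only (an invented example, not taken from the source): a
stream carrying a single mandatory compression parameter would serialize its
parameter blob as::

    Compression=BZ

preceded by an int32 `params size` of 14.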

Payload part
------------------------

The binary format is as follows

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route the part to an application level handler
  that can interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count:  1 byte, number of advisory parameters

    :param-sizes:

        N pairs of bytes, where N is the total number of parameters. Each
        pair contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

        A blob of bytes from which each parameter key and value can be
        retrieved using the list of size pairs stored in the previous
        field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a Mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import absolute_import, division

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
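
    eg::

        >>> _makefpartparamsizes(2)
        '>BBBB'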
206 206 """
207 207 return '>'+('BB'*nbparams)
208 208
209 209 parthandlermapping = {}
210 210
211 211 def parthandler(parttype, params=()):
212 212 """decorator that register a function as a bundle2 part handler
213 213
214 214 eg::
215 215
216 216 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
217 217 def myparttypehandler(...):
218 218 '''process a part of type "my part".'''
219 219 ...
220 220 """
221 221 validateparttype(parttype)
222 222 def _decorator(func):
223 223 lparttype = parttype.lower() # enforce lower case matching.
224 224 assert lparttype not in parthandlermapping
225 225 parthandlermapping[lparttype] = func
226 226 func.params = frozenset(params)
227 227 return func
228 228 return _decorator
229 229
230 230 class unbundlerecords(object):
231 231 """keep record of what happens during and unbundle
232 232
233 233 New records are added using `records.add('cat', obj)`. Where 'cat' is a
234 234 category of record and obj is an arbitrary object.
235 235
236 236 `records['cat']` will return all entries of this category 'cat'.
237 237
238 238 Iterating on the object itself will yield `('category', obj)` tuples
239 239 for all entries.
240 240
241 241 All iterations happens in chronological order.
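
    An illustrative use (a sketch, not taken from the source)::

        records = unbundlerecords()
        records.add('changegroup', {'return': 1})
        records['changegroup'] # -> ({'return': 1},)
        list(records)          # -> [('changegroup', {'return': 1})]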
242 242 """
243 243
244 244 def __init__(self):
245 245 self._categories = {}
246 246 self._sequences = []
247 247 self._replies = {}
248 248
249 249 def add(self, category, entry, inreplyto=None):
250 250 """add a new record of a given category.
251 251
252 252 The entry can then be retrieved in the list returned by
253 253 self['category']."""
254 254 self._categories.setdefault(category, []).append(entry)
255 255 self._sequences.append((category, entry))
256 256 if inreplyto is not None:
257 257 self.getreplies(inreplyto).add(category, entry)
258 258
259 259 def getreplies(self, partid):
260 260 """get the records that are replies to a specific part"""
261 261 return self._replies.setdefault(partid, unbundlerecords())
262 262
263 263 def __getitem__(self, cat):
264 264 return tuple(self._categories.get(cat, ()))
265 265
266 266 def __iter__(self):
267 267 return iter(self._sequences)
268 268
269 269 def __len__(self):
270 270 return len(self._sequences)
271 271
272 272 def __nonzero__(self):
273 273 return bool(self._sequences)
274 274
275 275 __bool__ = __nonzero__
276 276
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object currently has very little content; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction started')
        self.hookargs.update(hookargs)

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.seek(0, 2)
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        if exc:
            # If exiting or interrupted, do not attempt to seek the stream in
            # the finally block below. This makes abort faster.
            if (self.current and
                not isinstance(exc, (SystemExit, KeyboardInterrupt))):
                # consume the part content to not corrupt the stream.
                self.current.seek(0, 2)

            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                for part in self.iterator:
                    # consume the bundle content
                    part.seek(0, 2)
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    Unknown Mandatory parts will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
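
# An illustrative round-trip for the two helpers above (a sketch, not part
# of the original module):
#
#   blob = encodecaps({'HG20': [], 'digests': ['md5', 'sha1']})
#   # blob == 'HG20\ndigests=md5,sha1'
#   decodecaps(blob) # -> {'HG20': [], 'digests': ['md5', 'sha1']}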

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (),           # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""
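
    # An illustrative usage sketch (not exercised by this module):
    #
    #   bundler = bundle20(ui)
    #   bundler.setcompression('BZ')
    #   bundler.newpart('output', data='hello', mandatory=False)
    #   for chunk in bundler.getchunks():
    #       fp.write(chunk)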

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contains the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        will result in a failure of the whole bundling process.

        You can still fall back to manually create and add if you need better
        control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""
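
    # An illustrative consumption sketch (not exercised by this module):
    #
    #   unbundler = getunbundler(ui, fp)
    #   for part in unbundler.iterparts():
    #       ui.write('%s: %i bytes\n' % (part.type, len(part.read())))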

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """decode a blob of stream level parameters, applying their effects"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory; this function will raise a KeyError when one is unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, "ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle being
        interpreted to find its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            # Seek to the end of the part to force its consumption so the next
            # part can be read. But then seek back to the beginning so the
            # code consuming this generator has a part that starts at 0.
            part.seek(0, 2)
            part.seek(0)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif (util.safehasattr(self.data, 'next')
                  or util.safehasattr(self.data, '__next__')):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = ''.join(header)
        except TypeError:
            raise TypeError(r'Found a non-bytes trying to '
                            r'build bundle part header: %r' % header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = util.forcebytestr(exc)
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % bexc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % bexc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if (util.safehasattr(self.data, 'next')
            or util.safehasattr(self.data, '__next__')):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.seek(0, 2)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # the header is fully read, mark the part as initialized
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps
    dict
1428 1428 """
1429 1429 obscaps = caps.get('obsmarkers', ())
1430 1430 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1431 1431
1432 1432 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1433 1433 vfs=None, compression=None, compopts=None):
1434 1434 if bundletype.startswith('HG10'):
1435 1435 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1436 1436 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1437 1437 compression=compression, compopts=compopts)
1438 1438 elif not bundletype.startswith('HG20'):
1439 1439 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1440 1440
1441 1441 caps = {}
1442 1442 if 'obsolescence' in opts:
1443 1443 caps['obsmarkers'] = ('V1',)
1444 1444 bundle = bundle20(ui, caps)
1445 1445 bundle.setcompression(compression, compopts)
1446 1446 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1447 1447 chunkiter = bundle.getchunks()
1448 1448
1449 1449 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1450 1450
1451 1451 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1452 1452 # We should eventually reconcile this logic with the one behind
1453 1453 # 'exchange.getbundle2partsgenerator'.
1454 1454 #
1455 1455 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1456 1456 # different right now. So we keep them separated for now for the sake of
1457 1457 # simplicity.
1458 1458
1459 1459 # we always want a changegroup in such bundle
1460 1460 cgversion = opts.get('cg.version')
1461 1461 if cgversion is None:
1462 1462 cgversion = changegroup.safeversion(repo)
1463 1463 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1464 1464 part = bundler.newpart('changegroup', data=cg.getchunks())
1465 1465 part.addparam('version', cg.version)
1466 1466 if 'clcount' in cg.extras:
1467 1467 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1468 1468 mandatory=False)
1469 1469 if opts.get('phases') and repo.revs('%ln and secret()',
1470 1470 outgoing.missingheads):
1471 1471 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1472 1472
1473 1473 addparttagsfnodescache(repo, bundler, outgoing)
1474 1474
1475 1475 if opts.get('obsolescence', False):
1476 1476 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1477 1477 buildobsmarkerspart(bundler, obsmarkers)
1478 1478
1479 1479 if opts.get('phases', False):
1480 1480 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1481 1481 phasedata = phases.binaryencode(headsbyphase)
1482 1482 bundler.newpart('phase-heads', data=phasedata)
1483 1483
def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker
    format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

1524 1524 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1525 1525 compopts=None):
1526 1526 """Write a bundle file and return its filename.
1527 1527
1528 1528 Existing files will not be overwritten.
1529 1529 If no filename is specified, a temporary file is created.
1530 1530 bz2 compression can be turned off.
1531 1531 The bundle file will be deleted in case of errors.
1532 1532 """
1533 1533
1534 1534 if bundletype == "HG20":
1535 1535 bundle = bundle20(ui)
1536 1536 bundle.setcompression(compression, compopts)
1537 1537 part = bundle.newpart('changegroup', data=cg.getchunks())
1538 1538 part.addparam('version', cg.version)
1539 1539 if 'clcount' in cg.extras:
1540 1540 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1541 1541 mandatory=False)
1542 1542 chunkiter = bundle.getchunks()
1543 1543 else:
1544 1544 # compression argument is only for the bundle2 case
1545 1545 assert compression is None
1546 1546 if cg.version != '01':
1547 1547 raise error.Abort(_('old bundle types only support v1 '
1548 1548 'changegroups'))
1549 1549 header, comp = bundletypes[bundletype]
1550 1550 if comp not in util.compengines.supportedbundletypes:
1551 1551 raise error.Abort(_('unknown stream compression type: %s')
1552 1552 % comp)
1553 1553 compengine = util.compengines.forbundletype(comp)
1554 1554 def chunkiter():
1555 1555 yield header
1556 1556 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1557 1557 yield chunk
1558 1558 chunkiter = chunkiter()
1559 1559
1560 1560 # parse the changegroup data, otherwise we will block
1561 1561 # in case of sshrepo because we don't know the end of the stream
1562 1562 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
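# A rough usage sketch for the function above (file name illustrative),
# mirroring the 'HG10' path of writenewbundle: build a v1 changegroup and
# write it uncompressed:
#
#   cg = changegroup.makechangegroup(repo, outgoing, '01', 'bundle')
#   writebundle(ui, cg, 'out.hg', 'HG10UN')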
1563 1563
1564 1564 def combinechangegroupresults(op):
1565 1565 """logic to combine 0 or more addchangegroup results into one"""
1566 1566 results = [r.get('return', 0)
1567 1567 for r in op.records['changegroup']]
1568 1568 changedheads = 0
1569 1569 result = 1
1570 1570 for ret in results:
1571 1571 # If any changegroup result is 0, return 0
1572 1572 if ret == 0:
1573 1573 result = 0
1574 1574 break
1575 1575 if ret < -1:
1576 1576 changedheads += ret + 1
1577 1577 elif ret > 1:
1578 1578 changedheads += ret - 1
1579 1579 if changedheads > 0:
1580 1580 result = 1 + changedheads
1581 1581 elif changedheads < 0:
1582 1582 result = -1 + changedheads
1583 1583 return result
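# Worked example of the combination rule above (record values are
# hypothetical): for recorded returns [3, -2], the first entry adds
# ret - 1 == +2 to changedheads and the second adds ret + 1 == -1, so
# changedheads == 1 and the combined result is 1 + 1 == 2. A single 0
# anywhere short-circuits the whole result to 0.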
1584 1584
1585 1585 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1586 1586 'targetphase'))
1587 1587 def handlechangegroup(op, inpart):
1588 1588 """apply a changegroup part on the repo
1589 1589
1590 1590 This is a very early implementation that will be massively reworked
1591 1591 before being inflicted on any end-user.
1592 1592 """
1593 1593 tr = op.gettransaction()
1594 1594 unpackerversion = inpart.params.get('version', '01')
1595 1595 # We should raise an appropriate exception here
1596 1596 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1597 1597 # the source and url passed here are overwritten by the one contained in
1598 1598 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1599 1599 nbchangesets = None
1600 1600 if 'nbchanges' in inpart.params:
1601 1601 nbchangesets = int(inpart.params.get('nbchanges'))
1602 1602 if ('treemanifest' in inpart.params and
1603 1603 'treemanifest' not in op.repo.requirements):
1604 1604 if len(op.repo.changelog) != 0:
1605 1605 raise error.Abort(_(
1606 1606 "bundle contains tree manifests, but local repo is "
1607 1607 "non-empty and does not use tree manifests"))
1608 1608 op.repo.requirements.add('treemanifest')
1609 1609 op.repo._applyopenerreqs()
1610 1610 op.repo._writerequirements()
1611 1611 extrakwargs = {}
1612 1612 targetphase = inpart.params.get('targetphase')
1613 1613 if targetphase is not None:
1614 1614 extrakwargs['targetphase'] = int(targetphase)
1615 1615 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1616 1616 expectedtotal=nbchangesets, **extrakwargs)
1617 1617 if op.reply is not None:
1618 1618 # This is definitely not the final form of this
1619 1619 # return. But one needs to start somewhere.
1620 1620 part = op.reply.newpart('reply:changegroup', mandatory=False)
1621 1621 part.addparam(
1622 1622 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1623 1623 part.addparam('return', '%i' % ret, mandatory=False)
1624 1624 assert not inpart.read()
1625 1625
1626 1626 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1627 1627 ['digest:%s' % k for k in util.DIGESTS.keys()])
1628 1628 @parthandler('remote-changegroup', _remotechangegroupparams)
1629 1629 def handleremotechangegroup(op, inpart):
1630 1630 """apply a bundle10 on the repo, given an url and validation information
1631 1631
1632 1632 All the information about the remote bundle to import is given as
1633 1633 parameters. The parameters include:
1634 1634 - url: the url to the bundle10.
1635 1635 - size: the bundle10 file size. It is used to validate what was
1636 1636 retrieved by the client matches the server knowledge about the bundle.
1637 1637 - digests: a space separated list of the digest types provided as
1638 1638 parameters.
1639 1639 - digest:<digest-type>: the hexadecimal representation of the digest with
1640 1640 that name. Like the size, it is used to validate what was retrieved by
1641 1641 the client matches what the server knows about the bundle.
1642 1642
1643 1643 When multiple digest types are given, all of them are checked.
1644 1644 """
1645 1645 try:
1646 1646 raw_url = inpart.params['url']
1647 1647 except KeyError:
1648 1648 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1649 1649 parsed_url = util.url(raw_url)
1650 1650 if parsed_url.scheme not in capabilities['remote-changegroup']:
1651 1651 raise error.Abort(_('remote-changegroup does not support %s urls') %
1652 1652 parsed_url.scheme)
1653 1653
1654 1654 try:
1655 1655 size = int(inpart.params['size'])
1656 1656 except ValueError:
1657 1657 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1658 1658 % 'size')
1659 1659 except KeyError:
1660 1660 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1661 1661
1662 1662 digests = {}
1663 1663 for typ in inpart.params.get('digests', '').split():
1664 1664 param = 'digest:%s' % typ
1665 1665 try:
1666 1666 value = inpart.params[param]
1667 1667 except KeyError:
1668 1668 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1669 1669 param)
1670 1670 digests[typ] = value
1671 1671
1672 1672 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1673 1673
1674 1674 tr = op.gettransaction()
1675 1675 from . import exchange
1676 1676 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1677 1677 if not isinstance(cg, changegroup.cg1unpacker):
1678 1678 raise error.Abort(_('%s: not a bundle version 1.0') %
1679 1679 util.hidepassword(raw_url))
1680 1680 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1681 1681 if op.reply is not None:
1682 1682 # This is definitely not the final form of this
1683 1683 # return. But one needs to start somewhere.
1684 1684 part = op.reply.newpart('reply:changegroup')
1685 1685 part.addparam(
1686 1686 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1687 1687 part.addparam('return', '%i' % ret, mandatory=False)
1688 1688 try:
1689 1689 real_part.validate()
1690 1690 except error.Abort as e:
1691 1691 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1692 1692 (util.hidepassword(raw_url), str(e)))
1693 1693 assert not inpart.read()
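# Illustrative parameter set for a 'remote-changegroup' part (all values
# hypothetical; see the docstring above for the meaning of each key):
#
#   {'url': 'https://example.com/bundle.hg',
#    'size': '12345',
#    'digests': 'sha1',
#    'digest:sha1': '<40 hex characters>'}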
1694 1694
1695 1695 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1696 1696 def handlereplychangegroup(op, inpart):
1697 1697 ret = int(inpart.params['return'])
1698 1698 replyto = int(inpart.params['in-reply-to'])
1699 1699 op.records.add('changegroup', {'return': ret}, replyto)
1700 1700
1701 1701 @parthandler('check:heads')
1702 1702 def handlecheckheads(op, inpart):
1703 1703 """check that head of the repo did not change
1704 1704
1705 1705 This is used to detect a push race when using unbundle.
1706 1706 This replaces the "heads" argument of unbundle."""
1707 1707 h = inpart.read(20)
1708 1708 heads = []
1709 1709 while len(h) == 20:
1710 1710 heads.append(h)
1711 1711 h = inpart.read(20)
1712 1712 assert not h
1713 1713 # Trigger a transaction so that we are guaranteed to have the lock now.
1714 1714 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1715 1715 op.gettransaction()
1716 1716 if sorted(heads) != sorted(op.repo.heads()):
1717 1717 raise error.PushRaced('repository changed while pushing - '
1718 1718 'please try again')
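# Sketch of the matching producer side (assuming 'bundler' is a bundle20
# instance and 'heads' a list of 20-byte binary nodes): the payload is
# just the concatenated nodes that the handler above consumes in 20-byte
# slices:
#
#   bundler.newpart('check:heads', data=''.join(heads))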
1719 1719
1720 1720 @parthandler('check:updated-heads')
1721 1721 def handlecheckupdatedheads(op, inpart):
1722 1722 """check for race on the heads touched by a push
1723 1723
1724 1724 This is similar to 'check:heads' but focuses on the heads actually updated
1725 1725 during the push. If other activity happens on unrelated heads, it is
1726 1726 ignored.
1727 1727
1728 1728 This allows servers with high traffic to avoid push contention as long as
1729 1729 only unrelated parts of the graph are involved.
1730 1730 h = inpart.read(20)
1731 1731 heads = []
1732 1732 while len(h) == 20:
1733 1733 heads.append(h)
1734 1734 h = inpart.read(20)
1735 1735 assert not h
1736 1736 # trigger a transaction so that we are guaranteed to have the lock now.
1737 1737 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1738 1738 op.gettransaction()
1739 1739
1740 1740 currentheads = set()
1741 1741 for ls in op.repo.branchmap().itervalues():
1742 1742 currentheads.update(ls)
1743 1743
1744 1744 for h in heads:
1745 1745 if h not in currentheads:
1746 1746 raise error.PushRaced('repository changed while pushing - '
1747 1747 'please try again')
1748 1748
1749 1749 @parthandler('output')
1750 1750 def handleoutput(op, inpart):
1751 1751 """forward output captured on the server to the client"""
1752 1752 for line in inpart.read().splitlines():
1753 1753 op.ui.status(_('remote: %s\n') % line)
1754 1754
1755 1755 @parthandler('replycaps')
1756 1756 def handlereplycaps(op, inpart):
1757 1757 """Notify that a reply bundle should be created
1758 1758
1759 1759 The payload contains the capabilities information for the reply"""
1760 1760 caps = decodecaps(inpart.read())
1761 1761 if op.reply is None:
1762 1762 op.reply = bundle20(op.ui, caps)
1763 1763
1764 1764 class AbortFromPart(error.Abort):
1765 1765 """Sub-class of Abort that denotes an error from a bundle2 part."""
1766 1766
1767 1767 @parthandler('error:abort', ('message', 'hint'))
1768 1768 def handleerrorabort(op, inpart):
1769 1769 """Used to transmit abort error over the wire"""
1770 1770 raise AbortFromPart(inpart.params['message'],
1771 1771 hint=inpart.params.get('hint'))
1772 1772
1773 1773 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1774 1774 'in-reply-to'))
1775 1775 def handleerrorpushkey(op, inpart):
1776 1776 """Used to transmit failure of a mandatory pushkey over the wire"""
1777 1777 kwargs = {}
1778 1778 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1779 1779 value = inpart.params.get(name)
1780 1780 if value is not None:
1781 1781 kwargs[name] = value
1782 1782 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1783 1783
1784 1784 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1785 1785 def handleerrorunsupportedcontent(op, inpart):
1786 1786 """Used to transmit unknown content error over the wire"""
1787 1787 kwargs = {}
1788 1788 parttype = inpart.params.get('parttype')
1789 1789 if parttype is not None:
1790 1790 kwargs['parttype'] = parttype
1791 1791 params = inpart.params.get('params')
1792 1792 if params is not None:
1793 1793 kwargs['params'] = params.split('\0')
1794 1794
1795 1795 raise error.BundleUnknownFeatureError(**kwargs)
1796 1796
1797 1797 @parthandler('error:pushraced', ('message',))
1798 1798 def handleerrorpushraced(op, inpart):
1799 1799 """Used to transmit push race error over the wire"""
1800 1800 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1801 1801
1802 1802 @parthandler('listkeys', ('namespace',))
1803 1803 def handlelistkeys(op, inpart):
1804 1804 """retrieve pushkey namespace content stored in a bundle2"""
1805 1805 namespace = inpart.params['namespace']
1806 1806 r = pushkey.decodekeys(inpart.read())
1807 1807 op.records.add('listkeys', (namespace, r))
1808 1808
1809 1809 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1810 1810 def handlepushkey(op, inpart):
1811 1811 """process a pushkey request"""
1812 1812 dec = pushkey.decode
1813 1813 namespace = dec(inpart.params['namespace'])
1814 1814 key = dec(inpart.params['key'])
1815 1815 old = dec(inpart.params['old'])
1816 1816 new = dec(inpart.params['new'])
1817 1817 # Grab the transaction to ensure that we have the lock before performing the
1818 1818 # pushkey.
1819 1819 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1820 1820 op.gettransaction()
1821 1821 ret = op.repo.pushkey(namespace, key, old, new)
1822 1822 record = {'namespace': namespace,
1823 1823 'key': key,
1824 1824 'old': old,
1825 1825 'new': new}
1826 1826 op.records.add('pushkey', record)
1827 1827 if op.reply is not None:
1828 1828 rpart = op.reply.newpart('reply:pushkey')
1829 1829 rpart.addparam(
1830 1830 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1831 1831 rpart.addparam('return', '%i' % ret, mandatory=False)
1832 1832 if inpart.mandatory and not ret:
1833 1833 kwargs = {}
1834 1834 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1835 1835 if key in inpart.params:
1836 1836 kwargs[key] = inpart.params[key]
1837 1837 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
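# Illustrative 'pushkey' part parameters as seen after pushkey.decode
# (values hypothetical): namespace='bookmarks', key='@', old='', and new
# set to a hex node. A false return from repo.pushkey on a mandatory part
# raises PushkeyFailed, as above.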
1838 1838
1839 def _readphaseheads(inpart):
1840 headsbyphase = [[] for i in phases.allphases]
1841 entrysize = phases._fphasesentry.size
1842 while True:
1843 entry = inpart.read(entrysize)
1844 if len(entry) < entrysize:
1845 if entry:
1846 raise error.Abort(_('bad phase-heads bundle part'))
1847 break
1848 phase, node = phases._fphasesentry.unpack(entry)
1849 headsbyphase[phase].append(node)
1850 return headsbyphase
1851
1852 1839 @parthandler('phase-heads')
1853 1840 def handlephases(op, inpart):
1854 1841 """apply phases from bundle part to repo"""
1855 headsbyphase = _readphaseheads(inpart)
1842 headsbyphase = phases.binarydecode(inpart)
1856 1843 phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase)
1857 1844 op.records.add('phase-heads', {})
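# Round-trip sketch for the part above (assuming phases.binaryencode and
# phases.binarydecode are symmetric, as their pairing with
# _addpartsfromopts suggests; util.chunkbuffer supplies the file-like
# read() interface binarydecode expects):
#
#   data = phases.binaryencode(headsbyphase)
#   assert phases.binarydecode(util.chunkbuffer([data])) == headsbyphase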
1858 1845
1859 1846 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1860 1847 def handlepushkeyreply(op, inpart):
1861 1848 """retrieve the result of a pushkey request"""
1862 1849 ret = int(inpart.params['return'])
1863 1850 partid = int(inpart.params['in-reply-to'])
1864 1851 op.records.add('pushkey', {'return': ret}, partid)
1865 1852
1866 1853 @parthandler('obsmarkers')
1867 1854 def handleobsmarker(op, inpart):
1868 1855 """add a stream of obsmarkers to the repo"""
1869 1856 tr = op.gettransaction()
1870 1857 markerdata = inpart.read()
1871 1858 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
1872 1859 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1873 1860 % len(markerdata))
1874 1861 # The mergemarkers call will crash if marker creation is not enabled.
1875 1862 # we want to avoid this if the part is advisory.
1876 1863 if not inpart.mandatory and op.repo.obsstore.readonly:
1877 1864 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1878 1865 return
1879 1866 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1880 1867 op.repo.invalidatevolatilesets()
1881 1868 if new:
1882 1869 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1883 1870 op.records.add('obsmarkers', {'new': new})
1884 1871 if op.reply is not None:
1885 1872 rpart = op.reply.newpart('reply:obsmarkers')
1886 1873 rpart.addparam(
1887 1874 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1888 1875 rpart.addparam('new', '%i' % new, mandatory=False)
1889 1876
1890 1877
1891 1878 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1892 1879 def handleobsmarkerreply(op, inpart):
1893 1880 """retrieve the result of a pushkey request"""
1894 1881 ret = int(inpart.params['new'])
1895 1882 partid = int(inpart.params['in-reply-to'])
1896 1883 op.records.add('obsmarkers', {'new': ret}, partid)
1897 1884
1898 1885 @parthandler('hgtagsfnodes')
1899 1886 def handlehgtagsfnodes(op, inpart):
1900 1887 """Applies .hgtags fnodes cache entries to the local repo.
1901 1888
1902 1889 Payload is pairs of 20-byte changeset nodes and filenodes.
1903 1890 """
1904 1891 # Grab the transaction so we ensure that we have the lock at this point.
1905 1892 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1906 1893 op.gettransaction()
1907 1894 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1908 1895
1909 1896 count = 0
1910 1897 while True:
1911 1898 node = inpart.read(20)
1912 1899 fnode = inpart.read(20)
1913 1900 if len(node) < 20 or len(fnode) < 20:
1914 1901 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1915 1902 break
1916 1903 cache.setfnode(node, fnode)
1917 1904 count += 1
1918 1905
1919 1906 cache.write()
1920 1907 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
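# The matching producer lives in addparttagsfnodescache above: the
# payload is simply concatenated (node, fnode) 20-byte pairs, e.g. (names
# hypothetical):
#
#   bundler.newpart('hgtagsfnodes', data=''.join([node1, fnode1]))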
1921 1908
1922 1909 @parthandler('pushvars')
1923 1910 def bundle2getvars(op, part):
1924 1911 '''unbundle a bundle2 containing shellvars on the server'''
1925 1912 # An option to disable unbundling on server-side for security reasons
1926 1913 if op.ui.configbool('push', 'pushvars.server'):
1927 1914 hookargs = {}
1928 1915 for key, value in part.advisoryparams:
1929 1916 key = key.upper()
1930 1917 # We want pushed variables to have USERVAR_ prepended so we know
1931 1918 # they came from the --pushvar flag.
1932 1919 key = "USERVAR_" + key
1933 1920 hookargs[key] = value
1934 1921 op.addhookargs(hookargs)
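# Example of the transformation above (variable name hypothetical): a
# client-side 'hg push --pushvar DEBUG=1' reaches this handler as the
# advisory param ('DEBUG', '1') and is exposed to hooks as
# USERVAR_DEBUG=1.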
@@ -1,2310 +1,2310 b''
1 1 # debugcommands.py - command processing for debug* commands
2 2 #
3 3 # Copyright 2005-2016 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import codecs
11 11 import collections
12 12 import difflib
13 13 import errno
14 14 import operator
15 15 import os
16 16 import random
17 17 import socket
18 18 import ssl
19 19 import string
20 20 import sys
21 21 import tempfile
22 22 import time
23 23
24 24 from .i18n import _
25 25 from .node import (
26 26 bin,
27 27 hex,
28 28 nullhex,
29 29 nullid,
30 30 nullrev,
31 31 short,
32 32 )
33 33 from . import (
34 34 bundle2,
35 35 changegroup,
36 36 cmdutil,
37 37 color,
38 38 context,
39 39 dagparser,
40 40 dagutil,
41 41 encoding,
42 42 error,
43 43 exchange,
44 44 extensions,
45 45 filemerge,
46 46 fileset,
47 47 formatter,
48 48 hg,
49 49 localrepo,
50 50 lock as lockmod,
51 51 merge as mergemod,
52 52 obsolete,
53 53 obsutil,
54 54 phases,
55 55 policy,
56 56 pvec,
57 57 pycompat,
58 58 registrar,
59 59 repair,
60 60 revlog,
61 61 revset,
62 62 revsetlang,
63 63 scmutil,
64 64 setdiscovery,
65 65 simplemerge,
66 66 smartset,
67 67 sslutil,
68 68 streamclone,
69 69 templater,
70 70 treediscovery,
71 71 upgrade,
72 72 util,
73 73 vfs as vfsmod,
74 74 )
75 75
76 76 release = lockmod.release
77 77
78 78 command = registrar.command()
79 79
80 80 @command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
81 81 def debugancestor(ui, repo, *args):
82 82 """find the ancestor revision of two revisions in a given index"""
83 83 if len(args) == 3:
84 84 index, rev1, rev2 = args
85 85 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False), index)
86 86 lookup = r.lookup
87 87 elif len(args) == 2:
88 88 if not repo:
89 89 raise error.Abort(_('there is no Mercurial repository here '
90 90 '(.hg not found)'))
91 91 rev1, rev2 = args
92 92 r = repo.changelog
93 93 lookup = repo.lookup
94 94 else:
95 95 raise error.Abort(_('either two or three arguments required'))
96 96 a = r.ancestor(lookup(rev1), lookup(rev2))
97 97 ui.write('%d:%s\n' % (r.rev(a), hex(a)))
98 98
99 99 @command('debugapplystreamclonebundle', [], 'FILE')
100 100 def debugapplystreamclonebundle(ui, repo, fname):
101 101 """apply a stream clone bundle file"""
102 102 f = hg.openpath(ui, fname)
103 103 gen = exchange.readbundle(ui, f, fname)
104 104 gen.apply(repo)
105 105
106 106 @command('debugbuilddag',
107 107 [('m', 'mergeable-file', None, _('add single file mergeable changes')),
108 108 ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
109 109 ('n', 'new-file', None, _('add new file at each rev'))],
110 110 _('[OPTION]... [TEXT]'))
111 111 def debugbuilddag(ui, repo, text=None,
112 112 mergeable_file=False,
113 113 overwritten_file=False,
114 114 new_file=False):
115 115 """builds a repo with a given DAG from scratch in the current empty repo
116 116
117 117 The description of the DAG is read from stdin if not given on the
118 118 command line.
119 119
120 120 Elements:
121 121
122 122 - "+n" is a linear run of n nodes based on the current default parent
123 123 - "." is a single node based on the current default parent
124 124 - "$" resets the default parent to null (implied at the start);
125 125 otherwise the default parent is always the last node created
126 126 - "<p" sets the default parent to the backref p
127 127 - "*p" is a fork at parent p, which is a backref
128 128 - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
129 129 - "/p2" is a merge of the preceding node and p2
130 130 - ":tag" defines a local tag for the preceding node
131 131 - "@branch" sets the named branch for subsequent nodes
132 132 - "#...\\n" is a comment up to the end of the line
133 133
134 134 Whitespace between the above elements is ignored.
135 135
136 136 A backref is either
137 137
138 138 - a number n, which references the node curr-n, where curr is the current
139 139 node, or
140 140 - the name of a local tag you placed earlier using ":tag", or
141 141 - empty to denote the default parent.
142 142
143 143 All string-valued elements are either strictly alphanumeric, or must
144 144 be enclosed in double quotes ("..."), with "\\" as escape character.
145 145 """
146 146
147 147 if text is None:
148 148 ui.status(_("reading DAG from stdin\n"))
149 149 text = ui.fin.read()
150 150
151 151 cl = repo.changelog
152 152 if len(cl) > 0:
153 153 raise error.Abort(_('repository is not empty'))
154 154
155 155 # determine number of revs in DAG
156 156 total = 0
157 157 for type, data in dagparser.parsedag(text):
158 158 if type == 'n':
159 159 total += 1
160 160
161 161 if mergeable_file:
162 162 linesperrev = 2
163 163 # make a file with k lines per rev
164 164 initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
165 165 initialmergedlines.append("")
166 166
167 167 tags = []
168 168
169 169 wlock = lock = tr = None
170 170 try:
171 171 wlock = repo.wlock()
172 172 lock = repo.lock()
173 173 tr = repo.transaction("builddag")
174 174
175 175 at = -1
176 176 atbranch = 'default'
177 177 nodeids = []
178 178 id = 0
179 179 ui.progress(_('building'), id, unit=_('revisions'), total=total)
180 180 for type, data in dagparser.parsedag(text):
181 181 if type == 'n':
182 182 ui.note(('node %s\n' % str(data)))
183 183 id, ps = data
184 184
185 185 files = []
186 186 fctxs = {}
187 187
188 188 p2 = None
189 189 if mergeable_file:
190 190 fn = "mf"
191 191 p1 = repo[ps[0]]
192 192 if len(ps) > 1:
193 193 p2 = repo[ps[1]]
194 194 pa = p1.ancestor(p2)
195 195 base, local, other = [x[fn].data() for x in (pa, p1,
196 196 p2)]
197 197 m3 = simplemerge.Merge3Text(base, local, other)
198 198 ml = [l.strip() for l in m3.merge_lines()]
199 199 ml.append("")
200 200 elif at > 0:
201 201 ml = p1[fn].data().split("\n")
202 202 else:
203 203 ml = initialmergedlines
204 204 ml[id * linesperrev] += " r%i" % id
205 205 mergedtext = "\n".join(ml)
206 206 files.append(fn)
207 207 fctxs[fn] = context.memfilectx(repo, fn, mergedtext)
208 208
209 209 if overwritten_file:
210 210 fn = "of"
211 211 files.append(fn)
212 212 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
213 213
214 214 if new_file:
215 215 fn = "nf%i" % id
216 216 files.append(fn)
217 217 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
218 218 if len(ps) > 1:
219 219 if not p2:
220 220 p2 = repo[ps[1]]
221 221 for fn in p2:
222 222 if fn.startswith("nf"):
223 223 files.append(fn)
224 224 fctxs[fn] = p2[fn]
225 225
226 226 def fctxfn(repo, cx, path):
227 227 return fctxs.get(path)
228 228
229 229 if len(ps) == 0 or ps[0] < 0:
230 230 pars = [None, None]
231 231 elif len(ps) == 1:
232 232 pars = [nodeids[ps[0]], None]
233 233 else:
234 234 pars = [nodeids[p] for p in ps]
235 235 cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
236 236 date=(id, 0),
237 237 user="debugbuilddag",
238 238 extra={'branch': atbranch})
239 239 nodeid = repo.commitctx(cx)
240 240 nodeids.append(nodeid)
241 241 at = id
242 242 elif type == 'l':
243 243 id, name = data
244 244 ui.note(('tag %s\n' % name))
245 245 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
246 246 elif type == 'a':
247 247 ui.note(('branch %s\n' % data))
248 248 atbranch = data
249 249 ui.progress(_('building'), id, unit=_('revisions'), total=total)
250 250 tr.close()
251 251
252 252 if tags:
253 253 repo.vfs.write("localtags", "".join(tags))
254 254 finally:
255 255 ui.progress(_('building'), None)
256 256 release(tr, lock, wlock)
257 257
258 258 def _debugchangegroup(ui, gen, all=None, indent=0, **opts):
259 259 indent_string = ' ' * indent
260 260 if all:
261 261 ui.write(("%sformat: id, p1, p2, cset, delta base, len(delta)\n")
262 262 % indent_string)
263 263
264 264 def showchunks(named):
265 265 ui.write("\n%s%s\n" % (indent_string, named))
266 266 for deltadata in gen.deltaiter():
267 267 node, p1, p2, cs, deltabase, delta, flags = deltadata
268 268 ui.write("%s%s %s %s %s %s %s\n" %
269 269 (indent_string, hex(node), hex(p1), hex(p2),
270 270 hex(cs), hex(deltabase), len(delta)))
271 271
272 272 chunkdata = gen.changelogheader()
273 273 showchunks("changelog")
274 274 chunkdata = gen.manifestheader()
275 275 showchunks("manifest")
276 276 for chunkdata in iter(gen.filelogheader, {}):
277 277 fname = chunkdata['filename']
278 278 showchunks(fname)
279 279 else:
280 280 if isinstance(gen, bundle2.unbundle20):
281 281 raise error.Abort(_('use debugbundle2 for this file'))
282 282 chunkdata = gen.changelogheader()
283 283 for deltadata in gen.deltaiter():
284 284 node, p1, p2, cs, deltabase, delta, flags = deltadata
285 285 ui.write("%s%s\n" % (indent_string, hex(node)))
286 286
287 287 def _debugobsmarkers(ui, part, indent=0, **opts):
288 288 """display version and markers contained in 'data'"""
289 289 opts = pycompat.byteskwargs(opts)
290 290 data = part.read()
291 291 indent_string = ' ' * indent
292 292 try:
293 293 version, markers = obsolete._readmarkers(data)
294 294 except error.UnknownVersion as exc:
295 295 msg = "%sunsupported version: %s (%d bytes)\n"
296 296 msg %= indent_string, exc.version, len(data)
297 297 ui.write(msg)
298 298 else:
299 299 msg = "%sversion: %s (%d bytes)\n"
300 300 msg %= indent_string, version, len(data)
301 301 ui.write(msg)
302 302 fm = ui.formatter('debugobsolete', opts)
303 303 for rawmarker in sorted(markers):
304 304 m = obsutil.marker(None, rawmarker)
305 305 fm.startitem()
306 306 fm.plain(indent_string)
307 307 cmdutil.showmarker(fm, m)
308 308 fm.end()
309 309
310 310 def _debugphaseheads(ui, data, indent=0):
311 311 """display version and markers contained in 'data'"""
312 312 indent_string = ' ' * indent
313 headsbyphase = bundle2._readphaseheads(data)
313 headsbyphase = phases.binarydecode(data)
314 314 for phase in phases.allphases:
315 315 for head in headsbyphase[phase]:
316 316 ui.write(indent_string)
317 317 ui.write('%s %s\n' % (hex(head), phases.phasenames[phase]))
318 318
319 319 def _quasirepr(thing):
320 320 if isinstance(thing, (dict, util.sortdict, collections.OrderedDict)):
321 321 return '{%s}' % (
322 322 b', '.join(b'%s: %s' % (k, thing[k]) for k in sorted(thing)))
323 323 return pycompat.bytestr(repr(thing))
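# e.g. _quasirepr({'b': 1, 'a': 2}) == '{a: 2, b: 1}'; keys are sorted so
# the debug output stays stable across runs.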
324 324
325 325 def _debugbundle2(ui, gen, all=None, **opts):
326 326 """lists the contents of a bundle2"""
327 327 if not isinstance(gen, bundle2.unbundle20):
328 328 raise error.Abort(_('not a bundle2 file'))
329 329 ui.write(('Stream params: %s\n' % _quasirepr(gen.params)))
330 330 parttypes = opts.get(r'part_type', [])
331 331 for part in gen.iterparts():
332 332 if parttypes and part.type not in parttypes:
333 333 continue
334 334 ui.write('%s -- %s\n' % (part.type, _quasirepr(part.params)))
335 335 if part.type == 'changegroup':
336 336 version = part.params.get('version', '01')
337 337 cg = changegroup.getunbundler(version, part, 'UN')
338 338 _debugchangegroup(ui, cg, all=all, indent=4, **opts)
339 339 if part.type == 'obsmarkers':
340 340 _debugobsmarkers(ui, part, indent=4, **opts)
341 341 if part.type == 'phase-heads':
342 342 _debugphaseheads(ui, part, indent=4)
343 343
344 344 @command('debugbundle',
345 345 [('a', 'all', None, _('show all details')),
346 346 ('', 'part-type', [], _('show only the named part type')),
347 347 ('', 'spec', None, _('print the bundlespec of the bundle'))],
348 348 _('FILE'),
349 349 norepo=True)
350 350 def debugbundle(ui, bundlepath, all=None, spec=None, **opts):
351 351 """lists the contents of a bundle"""
352 352 with hg.openpath(ui, bundlepath) as f:
353 353 if spec:
354 354 spec = exchange.getbundlespec(ui, f)
355 355 ui.write('%s\n' % spec)
356 356 return
357 357
358 358 gen = exchange.readbundle(ui, f, bundlepath)
359 359 if isinstance(gen, bundle2.unbundle20):
360 360 return _debugbundle2(ui, gen, all=all, **opts)
361 361 _debugchangegroup(ui, gen, all=all, **opts)
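# Typical invocations of the command above (file name hypothetical):
#
#   $ hg debugbundle bundle.hg
#   $ hg debugbundle --all bundle.hg
#   $ hg debugbundle --part-type phase-heads bundle.hg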
362 362
363 363 @command('debugcheckstate', [], '')
364 364 def debugcheckstate(ui, repo):
365 365 """validate the correctness of the current dirstate"""
366 366 parent1, parent2 = repo.dirstate.parents()
367 367 m1 = repo[parent1].manifest()
368 368 m2 = repo[parent2].manifest()
369 369 errors = 0
370 370 for f in repo.dirstate:
371 371 state = repo.dirstate[f]
372 372 if state in "nr" and f not in m1:
373 373 ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
374 374 errors += 1
375 375 if state in "a" and f in m1:
376 376 ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
377 377 errors += 1
378 378 if state in "m" and f not in m1 and f not in m2:
379 379 ui.warn(_("%s in state %s, but not in either manifest\n") %
380 380 (f, state))
381 381 errors += 1
382 382 for f in m1:
383 383 state = repo.dirstate[f]
384 384 if state not in "nrm":
385 385 ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
386 386 errors += 1
387 387 if errors:
388 388 errstr = _(".hg/dirstate inconsistent with current parent's manifest")
389 389 raise error.Abort(errstr)
390 390
391 391 @command('debugcolor',
392 392 [('', 'style', None, _('show all configured styles'))],
393 393 'hg debugcolor')
394 394 def debugcolor(ui, repo, **opts):
395 395 """show available color, effects or style"""
396 396 ui.write(('color mode: %s\n') % ui._colormode)
397 397 if opts.get(r'style'):
398 398 return _debugdisplaystyle(ui)
399 399 else:
400 400 return _debugdisplaycolor(ui)
401 401
402 402 def _debugdisplaycolor(ui):
403 403 ui = ui.copy()
404 404 ui._styles.clear()
405 405 for effect in color._activeeffects(ui).keys():
406 406 ui._styles[effect] = effect
407 407 if ui._terminfoparams:
408 408 for k, v in ui.configitems('color'):
409 409 if k.startswith('color.'):
410 410 ui._styles[k] = k[6:]
411 411 elif k.startswith('terminfo.'):
412 412 ui._styles[k] = k[9:]
413 413 ui.write(_('available colors:\n'))
414 414 # sort label with a '_' after the other to group '_background' entry.
415 415 items = sorted(ui._styles.items(),
416 416 key=lambda i: ('_' in i[0], i[0], i[1]))
417 417 for colorname, label in items:
418 418 ui.write(('%s\n') % colorname, label=label)
419 419
420 420 def _debugdisplaystyle(ui):
421 421 ui.write(_('available styles:\n'))
422 422 width = max(len(s) for s in ui._styles)
423 423 for label, effects in sorted(ui._styles.items()):
424 424 ui.write('%s' % label, label=label)
425 425 if effects:
426 426 # 50
427 427 ui.write(': ')
428 428 ui.write(' ' * (max(0, width - len(label))))
429 429 ui.write(', '.join(ui.label(e, e) for e in effects.split()))
430 430 ui.write('\n')
431 431
432 432 @command('debugcreatestreamclonebundle', [], 'FILE')
433 433 def debugcreatestreamclonebundle(ui, repo, fname):
434 434 """create a stream clone bundle file
435 435
436 436 Stream bundles are special bundles that are essentially archives of
437 437 revlog files. They are commonly used for cloning very quickly.
438 438 """
439 439 # TODO we may want to turn this into an abort when this functionality
440 440 # is moved into `hg bundle`.
441 441 if phases.hassecret(repo):
442 442 ui.warn(_('(warning: stream clone bundle will contain secret '
443 443 'revisions)\n'))
444 444
445 445 requirements, gen = streamclone.generatebundlev1(repo)
446 446 changegroup.writechunks(ui, gen, fname)
447 447
448 448 ui.write(_('bundle requirements: %s\n') % ', '.join(sorted(requirements)))
449 449
450 450 @command('debugdag',
451 451 [('t', 'tags', None, _('use tags as labels')),
452 452 ('b', 'branches', None, _('annotate with branch names')),
453 453 ('', 'dots', None, _('use dots for runs')),
454 454 ('s', 'spaces', None, _('separate elements by spaces'))],
455 455 _('[OPTION]... [FILE [REV]...]'),
456 456 optionalrepo=True)
457 457 def debugdag(ui, repo, file_=None, *revs, **opts):
458 458 """format the changelog or an index DAG as a concise textual description
459 459
460 460 If you pass a revlog index, the revlog's DAG is emitted. If you list
461 461 revision numbers, they get labeled in the output as rN.
462 462
463 463 Otherwise, the changelog DAG of the current repo is emitted.
464 464 """
465 465 spaces = opts.get(r'spaces')
466 466 dots = opts.get(r'dots')
467 467 if file_:
468 468 rlog = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
469 469 file_)
470 470 revs = set((int(r) for r in revs))
471 471 def events():
472 472 for r in rlog:
473 473 yield 'n', (r, list(p for p in rlog.parentrevs(r)
474 474 if p != -1))
475 475 if r in revs:
476 476 yield 'l', (r, "r%i" % r)
477 477 elif repo:
478 478 cl = repo.changelog
479 479 tags = opts.get(r'tags')
480 480 branches = opts.get(r'branches')
481 481 if tags:
482 482 labels = {}
483 483 for l, n in repo.tags().items():
484 484 labels.setdefault(cl.rev(n), []).append(l)
485 485 def events():
486 486 b = "default"
487 487 for r in cl:
488 488 if branches:
489 489 newb = cl.read(cl.node(r))[5]['branch']
490 490 if newb != b:
491 491 yield 'a', newb
492 492 b = newb
493 493 yield 'n', (r, list(p for p in cl.parentrevs(r)
494 494 if p != -1))
495 495 if tags:
496 496 ls = labels.get(r)
497 497 if ls:
498 498 for l in ls:
499 499 yield 'l', (r, l)
500 500 else:
501 501 raise error.Abort(_('need repo for changelog dag'))
502 502
503 503 for line in dagparser.dagtextlines(events(),
504 504 addspaces=spaces,
505 505 wraplabels=True,
506 506 wrapannotations=True,
507 507 wrapnonlinear=dots,
508 508 usedots=dots,
509 509 maxlinewidth=70):
510 510 ui.write(line)
511 511 ui.write("\n")
512 512
513 513 @command('debugdata', cmdutil.debugrevlogopts, _('-c|-m|FILE REV'))
514 514 def debugdata(ui, repo, file_, rev=None, **opts):
515 515 """dump the contents of a data file revision"""
516 516 opts = pycompat.byteskwargs(opts)
517 517 if opts.get('changelog') or opts.get('manifest') or opts.get('dir'):
518 518 if rev is not None:
519 519 raise error.CommandError('debugdata', _('invalid arguments'))
520 520 file_, rev = None, file_
521 521 elif rev is None:
522 522 raise error.CommandError('debugdata', _('invalid arguments'))
523 523 r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
524 524 try:
525 525 ui.write(r.revision(r.lookup(rev), raw=True))
526 526 except KeyError:
527 527 raise error.Abort(_('invalid revision identifier %s') % rev)
528 528
529 529 @command('debugdate',
530 530 [('e', 'extended', None, _('try extended date formats'))],
531 531 _('[-e] DATE [RANGE]'),
532 532 norepo=True, optionalrepo=True)
533 533 def debugdate(ui, date, range=None, **opts):
534 534 """parse and display a date"""
535 535 if opts[r"extended"]:
536 536 d = util.parsedate(date, util.extendeddateformats)
537 537 else:
538 538 d = util.parsedate(date)
539 539 ui.write(("internal: %s %s\n") % d)
540 540 ui.write(("standard: %s\n") % util.datestr(d))
541 541 if range:
542 542 m = util.matchdate(range)
543 543 ui.write(("match: %s\n") % m(d[0]))
544 544
545 545 @command('debugdeltachain',
546 546 cmdutil.debugrevlogopts + cmdutil.formatteropts,
547 547 _('-c|-m|FILE'),
548 548 optionalrepo=True)
549 549 def debugdeltachain(ui, repo, file_=None, **opts):
550 550 """dump information about delta chains in a revlog
551 551
552 552 Output can be templatized. Available template keywords are:
553 553
554 554 :``rev``: revision number
555 555 :``chainid``: delta chain identifier (numbered by unique base)
556 556 :``chainlen``: delta chain length to this revision
557 557 :``prevrev``: previous revision in delta chain
558 558 :``deltatype``: role of delta / how it was computed
559 559 :``compsize``: compressed size of revision
560 560 :``uncompsize``: uncompressed size of revision
561 561 :``chainsize``: total size of compressed revisions in chain
562 562 :``chainratio``: total chain size divided by uncompressed revision size
563 563 (new delta chains typically start at ratio 2.00)
564 564 :``lindist``: linear distance from base revision in delta chain to end
565 565 of this revision
566 566 :``extradist``: total size of revisions not part of this delta chain from
567 567 base of delta chain to end of this revision; a measurement
568 568 of how much extra data we need to read/seek across to read
569 569 the delta chain for this revision
570 570 :``extraratio``: extradist divided by chainsize; another representation of
571 571 how much unrelated data is needed to load this delta chain
572 572 """
573 573 opts = pycompat.byteskwargs(opts)
574 574 r = cmdutil.openrevlog(repo, 'debugdeltachain', file_, opts)
575 575 index = r.index
576 576 generaldelta = r.version & revlog.FLAG_GENERALDELTA
577 577
578 578 def revinfo(rev):
579 579 e = index[rev]
580 580 compsize = e[1]
581 581 uncompsize = e[2]
582 582 chainsize = 0
583 583
584 584 if generaldelta:
585 585 if e[3] == e[5]:
586 586 deltatype = 'p1'
587 587 elif e[3] == e[6]:
588 588 deltatype = 'p2'
589 589 elif e[3] == rev - 1:
590 590 deltatype = 'prev'
591 591 elif e[3] == rev:
592 592 deltatype = 'base'
593 593 else:
594 594 deltatype = 'other'
595 595 else:
596 596 if e[3] == rev:
597 597 deltatype = 'base'
598 598 else:
599 599 deltatype = 'prev'
600 600
601 601 chain = r._deltachain(rev)[0]
602 602 for iterrev in chain:
603 603 e = index[iterrev]
604 604 chainsize += e[1]
605 605
606 606 return compsize, uncompsize, deltatype, chain, chainsize
607 607
608 608 fm = ui.formatter('debugdeltachain', opts)
609 609
610 610 fm.plain(' rev chain# chainlen prev delta '
611 611 'size rawsize chainsize ratio lindist extradist '
612 612 'extraratio\n')
613 613
614 614 chainbases = {}
615 615 for rev in r:
616 616 comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
617 617 chainbase = chain[0]
618 618 chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
619 619 basestart = r.start(chainbase)
620 620 revstart = r.start(rev)
621 621 lineardist = revstart + comp - basestart
622 622 extradist = lineardist - chainsize
623 623 try:
624 624 prevrev = chain[-2]
625 625 except IndexError:
626 626 prevrev = -1
627 627
628 628 chainratio = float(chainsize) / float(uncomp)
629 629 extraratio = float(extradist) / float(chainsize)
630 630
631 631 fm.startitem()
632 632 fm.write('rev chainid chainlen prevrev deltatype compsize '
633 633 'uncompsize chainsize chainratio lindist extradist '
634 634 'extraratio',
635 635 '%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f\n',
636 636 rev, chainid, len(chain), prevrev, deltatype, comp,
637 637 uncomp, chainsize, chainratio, lineardist, extradist,
638 638 extraratio,
639 639 rev=rev, chainid=chainid, chainlen=len(chain),
640 640 prevrev=prevrev, deltatype=deltatype, compsize=comp,
641 641 uncompsize=uncomp, chainsize=chainsize,
642 642 chainratio=chainratio, lindist=lineardist,
643 643 extradist=extradist, extraratio=extraratio)
644 644
645 645 fm.end()
646 646
647 647 @command('debugdirstate|debugstate',
648 648 [('', 'nodates', None, _('do not display the saved mtime')),
649 649 ('', 'datesort', None, _('sort by saved mtime'))],
650 650 _('[OPTION]...'))
651 651 def debugstate(ui, repo, **opts):
652 652 """show the contents of the current dirstate"""
653 653
654 654 nodates = opts.get(r'nodates')
655 655 datesort = opts.get(r'datesort')
656 656
657 657 timestr = ""
658 658 if datesort:
659 659 keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
660 660 else:
661 661 keyfunc = None # sort by filename
662 662 for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
663 663 if ent[3] == -1:
664 664 timestr = 'unset '
665 665 elif nodates:
666 666 timestr = 'set '
667 667 else:
668 668 timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
669 669 time.localtime(ent[3]))
670 670 if ent[1] & 0o20000:
671 671 mode = 'lnk'
672 672 else:
673 673 mode = '%3o' % (ent[1] & 0o777 & ~util.umask)
674 674 ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
675 675 for f in repo.dirstate.copies():
676 676 ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))
677 677
678 678 @command('debugdiscovery',
679 679 [('', 'old', None, _('use old-style discovery')),
680 680 ('', 'nonheads', None,
681 681 _('use old-style discovery with non-heads included')),
682 682 ] + cmdutil.remoteopts,
683 683 _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
684 684 def debugdiscovery(ui, repo, remoteurl="default", **opts):
685 685 """runs the changeset discovery protocol in isolation"""
686 686 opts = pycompat.byteskwargs(opts)
687 687 remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
688 688 opts.get('branch'))
689 689 remote = hg.peer(repo, opts, remoteurl)
690 690 ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))
691 691
692 692 # make sure tests are repeatable
693 693 random.seed(12323)
694 694
695 695 def doit(localheads, remoteheads, remote=remote):
696 696 if opts.get('old'):
697 697 if localheads:
698 698 raise error.Abort('cannot use localheads with old style '
699 699 'discovery')
700 700 if not util.safehasattr(remote, 'branches'):
701 701 # enable in-client legacy support
702 702 remote = localrepo.locallegacypeer(remote.local())
703 703 common, _in, hds = treediscovery.findcommonincoming(repo, remote,
704 704 force=True)
705 705 common = set(common)
706 706 if not opts.get('nonheads'):
707 707 ui.write(("unpruned common: %s\n") %
708 708 " ".join(sorted(short(n) for n in common)))
709 709 dag = dagutil.revlogdag(repo.changelog)
710 710 all = dag.ancestorset(dag.internalizeall(common))
711 711 common = dag.externalizeall(dag.headsetofconnecteds(all))
712 712 else:
713 713 common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
714 714 common = set(common)
715 715 rheads = set(hds)
716 716 lheads = set(repo.heads())
717 717 ui.write(("common heads: %s\n") %
718 718 " ".join(sorted(short(n) for n in common)))
719 719 if lheads <= common:
720 720 ui.write(("local is subset\n"))
721 721 elif rheads <= common:
722 722 ui.write(("remote is subset\n"))
723 723
724 724 serverlogs = opts.get('serverlog')
725 725 if serverlogs:
726 726 for filename in serverlogs:
727 727 with open(filename, 'r') as logfile:
728 728 line = logfile.readline()
729 729 while line:
730 730 parts = line.strip().split(';')
731 731 op = parts[1]
732 732 if op == 'cg':
733 733 pass
734 734 elif op == 'cgss':
735 735 doit(parts[2].split(' '), parts[3].split(' '))
736 736 elif op == 'unb':
737 737 doit(parts[3].split(' '), parts[2].split(' '))
738 738 line = logfile.readline()
739 739 else:
740 740 remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
741 741 opts.get('remote_head'))
742 742 localrevs = opts.get('local_head')
743 743 doit(localrevs, remoterevs)
744 744
745 745 @command('debugextensions', cmdutil.formatteropts, [], norepo=True)
746 746 def debugextensions(ui, **opts):
747 747 '''show information about active extensions'''
748 748 opts = pycompat.byteskwargs(opts)
749 749 exts = extensions.extensions(ui)
750 750 hgver = util.version()
751 751 fm = ui.formatter('debugextensions', opts)
752 752 for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
753 753 isinternal = extensions.ismoduleinternal(extmod)
754 754 extsource = pycompat.fsencode(extmod.__file__)
755 755 if isinternal:
756 756 exttestedwith = [] # never expose magic string to users
757 757 else:
758 758 exttestedwith = getattr(extmod, 'testedwith', '').split()
759 759 extbuglink = getattr(extmod, 'buglink', None)
760 760
761 761 fm.startitem()
762 762
763 763 if ui.quiet or ui.verbose:
764 764 fm.write('name', '%s\n', extname)
765 765 else:
766 766 fm.write('name', '%s', extname)
767 767 if isinternal or hgver in exttestedwith:
768 768 fm.plain('\n')
769 769 elif not exttestedwith:
770 770 fm.plain(_(' (untested!)\n'))
771 771 else:
772 772 lasttestedversion = exttestedwith[-1]
773 773 fm.plain(' (%s!)\n' % lasttestedversion)
774 774
775 775 fm.condwrite(ui.verbose and extsource, 'source',
776 776 _(' location: %s\n'), extsource or "")
777 777
778 778 if ui.verbose:
779 779 fm.plain(_(' bundled: %s\n') % ['no', 'yes'][isinternal])
780 780 fm.data(bundled=isinternal)
781 781
782 782 fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
783 783 _(' tested with: %s\n'),
784 784 fm.formatlist(exttestedwith, name='ver'))
785 785
786 786 fm.condwrite(ui.verbose and extbuglink, 'buglink',
787 787 _(' bug reporting: %s\n'), extbuglink or "")
788 788
789 789 fm.end()
790 790
791 791 @command('debugfileset',
792 792 [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
793 793 _('[-r REV] FILESPEC'))
794 794 def debugfileset(ui, repo, expr, **opts):
795 795 '''parse and apply a fileset specification'''
796 796 ctx = scmutil.revsingle(repo, opts.get(r'rev'), None)
797 797 if ui.verbose:
798 798 tree = fileset.parse(expr)
799 799 ui.note(fileset.prettyformat(tree), "\n")
800 800
801 801 for f in ctx.getfileset(expr):
802 802 ui.write("%s\n" % f)
803 803
804 804 @command('debugfsinfo', [], _('[PATH]'), norepo=True)
805 805 def debugfsinfo(ui, path="."):
806 806 """show information detected about current filesystem"""
807 807 ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
808 808 ui.write(('fstype: %s\n') % (util.getfstype(path) or '(unknown)'))
809 809 ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
810 810 ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
811 811 casesensitive = '(unknown)'
812 812 try:
813 813 with tempfile.NamedTemporaryFile(prefix='.debugfsinfo', dir=path) as f:
814 814 casesensitive = util.fscasesensitive(f.name) and 'yes' or 'no'
815 815 except OSError:
816 816 pass
817 817 ui.write(('case-sensitive: %s\n') % casesensitive)
818 818
819 819 @command('debuggetbundle',
820 820 [('H', 'head', [], _('id of head node'), _('ID')),
821 821 ('C', 'common', [], _('id of common node'), _('ID')),
822 822 ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
823 823 _('REPO FILE [-H|-C ID]...'),
824 824 norepo=True)
825 825 def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
826 826 """retrieves a bundle from a repo
827 827
828 828 Every ID must be a full-length hex node id string. Saves the bundle to the
829 829 given file.
830 830 """
831 831 opts = pycompat.byteskwargs(opts)
832 832 repo = hg.peer(ui, opts, repopath)
833 833 if not repo.capable('getbundle'):
834 834 raise error.Abort("getbundle() not supported by target repository")
835 835 args = {}
836 836 if common:
837 837 args[r'common'] = [bin(s) for s in common]
838 838 if head:
839 839 args[r'heads'] = [bin(s) for s in head]
840 840 # TODO: get desired bundlecaps from command line.
841 841 args[r'bundlecaps'] = None
842 842 bundle = repo.getbundle('debug', **args)
843 843
844 844 bundletype = opts.get('type', 'bzip2').lower()
845 845 btypes = {'none': 'HG10UN',
846 846 'bzip2': 'HG10BZ',
847 847 'gzip': 'HG10GZ',
848 848 'bundle2': 'HG20'}
849 849 bundletype = btypes.get(bundletype)
850 850 if bundletype not in bundle2.bundletypes:
851 851 raise error.Abort(_('unknown bundle type specified with --type'))
852 852 bundle2.writebundle(ui, bundle, bundlepath, bundletype)
853 853
854 854 @command('debugignore', [], '[FILE]')
855 855 def debugignore(ui, repo, *files, **opts):
856 856 """display the combined ignore pattern and information about ignored files
857 857
858 858 With no argument display the combined ignore pattern.
859 859
860 860 Given space-separated file names, shows whether each file is ignored and,
861 861 if so, shows the ignore rule (file and line number) that matched it.
862 862 """
863 863 ignore = repo.dirstate._ignore
864 864 if not files:
865 865 # Show all the patterns
866 866 ui.write("%s\n" % repr(ignore))
867 867 else:
868 868 m = scmutil.match(repo[None], pats=files)
869 869 for f in m.files():
870 870 nf = util.normpath(f)
871 871 ignored = None
872 872 ignoredata = None
873 873 if nf != '.':
874 874 if ignore(nf):
875 875 ignored = nf
876 876 ignoredata = repo.dirstate._ignorefileandline(nf)
877 877 else:
878 878 for p in util.finddirs(nf):
879 879 if ignore(p):
880 880 ignored = p
881 881 ignoredata = repo.dirstate._ignorefileandline(p)
882 882 break
883 883 if ignored:
884 884 if ignored == nf:
885 885 ui.write(_("%s is ignored\n") % m.uipath(f))
886 886 else:
887 887 ui.write(_("%s is ignored because of "
888 888 "containing folder %s\n")
889 889 % (m.uipath(f), ignored))
890 890 ignorefile, lineno, line = ignoredata
891 891 ui.write(_("(ignore rule in %s, line %d: '%s')\n")
892 892 % (ignorefile, lineno, line))
893 893 else:
894 894 ui.write(_("%s is not ignored\n") % m.uipath(f))
895 895
896 896 @command('debugindex', cmdutil.debugrevlogopts +
897 897 [('f', 'format', 0, _('revlog format'), _('FORMAT'))],
898 898 _('[-f FORMAT] -c|-m|FILE'),
899 899 optionalrepo=True)
900 900 def debugindex(ui, repo, file_=None, **opts):
901 901 """dump the contents of an index file"""
902 902 opts = pycompat.byteskwargs(opts)
903 903 r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
904 904 format = opts.get('format', 0)
905 905 if format not in (0, 1):
906 906 raise error.Abort(_("unknown format %d") % format)
907 907
908 908 generaldelta = r.version & revlog.FLAG_GENERALDELTA
909 909 if generaldelta:
910 910 basehdr = ' delta'
911 911 else:
912 912 basehdr = ' base'
913 913
914 914 if ui.debugflag:
915 915 shortfn = hex
916 916 else:
917 917 shortfn = short
918 918
919 919 # There might not be anything in r, so have a sane default
920 920 idlen = 12
921 921 for i in r:
922 922 idlen = len(shortfn(r.node(i)))
923 923 break
924 924
925 925 if format == 0:
926 926 ui.write((" rev offset length " + basehdr + " linkrev"
927 927 " %s %s p2\n") % ("nodeid".ljust(idlen), "p1".ljust(idlen)))
928 928 elif format == 1:
929 929 ui.write((" rev flag offset length"
930 930 " size " + basehdr + " link p1 p2"
931 931 " %s\n") % "nodeid".rjust(idlen))
932 932
933 933 for i in r:
934 934 node = r.node(i)
935 935 if generaldelta:
936 936 base = r.deltaparent(i)
937 937 else:
938 938 base = r.chainbase(i)
939 939 if format == 0:
940 940 try:
941 941 pp = r.parents(node)
942 942 except Exception:
943 943 pp = [nullid, nullid]
944 944 ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
945 945 i, r.start(i), r.length(i), base, r.linkrev(i),
946 946 shortfn(node), shortfn(pp[0]), shortfn(pp[1])))
947 947 elif format == 1:
948 948 pr = r.parentrevs(i)
949 949 ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
950 950 i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
951 951 base, r.linkrev(i), pr[0], pr[1], shortfn(node)))
952 952
953 953 @command('debugindexdot', cmdutil.debugrevlogopts,
954 954 _('-c|-m|FILE'), optionalrepo=True)
955 955 def debugindexdot(ui, repo, file_=None, **opts):
956 956 """dump an index DAG as a graphviz dot file"""
957 957 opts = pycompat.byteskwargs(opts)
958 958 r = cmdutil.openrevlog(repo, 'debugindexdot', file_, opts)
959 959 ui.write(("digraph G {\n"))
960 960 for i in r:
961 961 node = r.node(i)
962 962 pp = r.parents(node)
963 963 ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
964 964 if pp[1] != nullid:
965 965 ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
966 966 ui.write("}\n")
967 967
968 968 @command('debuginstall', [] + cmdutil.formatteropts, '', norepo=True)
969 969 def debuginstall(ui, **opts):
970 970 '''test Mercurial installation
971 971
972 972 Returns 0 on success.
973 973 '''
974 974 opts = pycompat.byteskwargs(opts)
975 975
976 976 def writetemp(contents):
977 977 (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
978 978 f = os.fdopen(fd, pycompat.sysstr("wb"))
979 979 f.write(contents)
980 980 f.close()
981 981 return name
982 982
983 983 problems = 0
984 984
985 985 fm = ui.formatter('debuginstall', opts)
986 986 fm.startitem()
987 987
988 988 # encoding
989 989 fm.write('encoding', _("checking encoding (%s)...\n"), encoding.encoding)
990 990 err = None
991 991 try:
992 992 codecs.lookup(pycompat.sysstr(encoding.encoding))
993 993 except LookupError as inst:
994 994 err = util.forcebytestr(inst)
995 995 problems += 1
996 996 fm.condwrite(err, 'encodingerror', _(" %s\n"
997 997 " (check that your locale is properly set)\n"), err)
998 998
999 999 # Python
1000 1000 fm.write('pythonexe', _("checking Python executable (%s)\n"),
1001 1001 pycompat.sysexecutable)
1002 1002 fm.write('pythonver', _("checking Python version (%s)\n"),
1003 1003 ("%d.%d.%d" % sys.version_info[:3]))
1004 1004 fm.write('pythonlib', _("checking Python lib (%s)...\n"),
1005 1005 os.path.dirname(pycompat.fsencode(os.__file__)))
1006 1006
1007 1007 security = set(sslutil.supportedprotocols)
1008 1008 if sslutil.hassni:
1009 1009 security.add('sni')
1010 1010
1011 1011 fm.write('pythonsecurity', _("checking Python security support (%s)\n"),
1012 1012 fm.formatlist(sorted(security), name='protocol',
1013 1013 fmt='%s', sep=','))
1014 1014
1015 1015 # These are warnings, not errors. So don't increment problem count. This
1016 1016 # may change in the future.
1017 1017 if 'tls1.2' not in security:
1018 1018 fm.plain(_(' TLS 1.2 not supported by Python install; '
1019 1019 'network connections lack modern security\n'))
1020 1020 if 'sni' not in security:
1021 1021 fm.plain(_(' SNI not supported by Python install; may have '
1022 1022 'connectivity issues with some servers\n'))
1023 1023
1024 1024 # TODO print CA cert info
1025 1025
1026 1026 # hg version
1027 1027 hgver = util.version()
1028 1028 fm.write('hgver', _("checking Mercurial version (%s)\n"),
1029 1029 hgver.split('+')[0])
1030 1030 fm.write('hgverextra', _("checking Mercurial custom build (%s)\n"),
1031 1031 '+'.join(hgver.split('+')[1:]))
1032 1032
1033 1033 # compiled modules
1034 1034 fm.write('hgmodulepolicy', _("checking module policy (%s)\n"),
1035 1035 policy.policy)
1036 1036 fm.write('hgmodules', _("checking installed modules (%s)...\n"),
1037 1037 os.path.dirname(pycompat.fsencode(__file__)))
1038 1038
1039 1039 if policy.policy in ('c', 'allow'):
1040 1040 err = None
1041 1041 try:
1042 1042 from .cext import (
1043 1043 base85,
1044 1044 bdiff,
1045 1045 mpatch,
1046 1046 osutil,
1047 1047 )
1048 1048 dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
1049 1049 except Exception as inst:
1050 1050 err = util.forcebytestr(inst)
1051 1051 problems += 1
1052 1052 fm.condwrite(err, 'extensionserror', " %s\n", err)
1053 1053
1054 1054 compengines = util.compengines._engines.values()
1055 1055 fm.write('compengines', _('checking registered compression engines (%s)\n'),
1056 1056 fm.formatlist(sorted(e.name() for e in compengines),
1057 1057 name='compengine', fmt='%s', sep=', '))
1058 1058 fm.write('compenginesavail', _('checking available compression engines '
1059 1059 '(%s)\n'),
1060 1060 fm.formatlist(sorted(e.name() for e in compengines
1061 1061 if e.available()),
1062 1062 name='compengine', fmt='%s', sep=', '))
1063 1063 wirecompengines = util.compengines.supportedwireengines(util.SERVERROLE)
1064 1064 fm.write('compenginesserver', _('checking available compression engines '
1065 1065 'for wire protocol (%s)\n'),
1066 1066 fm.formatlist([e.name() for e in wirecompengines
1067 1067 if e.wireprotosupport()],
1068 1068 name='compengine', fmt='%s', sep=', '))
1069 1069
1070 1070 # templates
1071 1071 p = templater.templatepaths()
1072 1072 fm.write('templatedirs', 'checking templates (%s)...\n', ' '.join(p))
1073 1073 fm.condwrite(not p, '', _(" no template directories found\n"))
1074 1074 if p:
1075 1075 m = templater.templatepath("map-cmdline.default")
1076 1076 if m:
1077 1077 # template found, check if it is working
1078 1078 err = None
1079 1079 try:
1080 1080 templater.templater.frommapfile(m)
1081 1081 except Exception as inst:
1082 1082 err = util.forcebytestr(inst)
1083 1083 p = None
1084 1084 fm.condwrite(err, 'defaulttemplateerror', " %s\n", err)
1085 1085 else:
1086 1086 p = None
1087 1087 fm.condwrite(p, 'defaulttemplate',
1088 1088 _("checking default template (%s)\n"), m)
1089 1089 fm.condwrite(not m, 'defaulttemplatenotfound',
1090 1090 _(" template '%s' not found\n"), "default")
1091 1091 if not p:
1092 1092 problems += 1
1093 1093 fm.condwrite(not p, '',
1094 1094 _(" (templates seem to have been installed incorrectly)\n"))
1095 1095
1096 1096 # editor
1097 1097 editor = ui.geteditor()
1098 1098 editor = util.expandpath(editor)
1099 1099 fm.write('editor', _("checking commit editor... (%s)\n"), editor)
1100 1100 cmdpath = util.findexe(pycompat.shlexsplit(editor)[0])
1101 1101 fm.condwrite(not cmdpath and editor == 'vi', 'vinotfound',
1102 1102 _(" No commit editor set and can't find %s in PATH\n"
1103 1103 " (specify a commit editor in your configuration"
1104 1104 " file)\n"), not cmdpath and editor == 'vi' and editor)
1105 1105 fm.condwrite(not cmdpath and editor != 'vi', 'editornotfound',
1106 1106 _(" Can't find editor '%s' in PATH\n"
1107 1107 " (specify a commit editor in your configuration"
1108 1108 " file)\n"), not cmdpath and editor)
1109 1109 if not cmdpath and editor != 'vi':
1110 1110 problems += 1
1111 1111
1112 1112 # check username
1113 1113 username = None
1114 1114 err = None
1115 1115 try:
1116 1116 username = ui.username()
1117 1117 except error.Abort as e:
1118 1118 err = util.forcebytestr(e)
1119 1119 problems += 1
1120 1120
1121 1121 fm.condwrite(username, 'username', _("checking username (%s)\n"), username)
1122 1122 fm.condwrite(err, 'usernameerror', _("checking username...\n %s\n"
1123 1123 " (specify a username in your configuration file)\n"), err)
1124 1124
1125 1125 fm.condwrite(not problems, '',
1126 1126 _("no problems detected\n"))
1127 1127 if not problems:
1128 1128 fm.data(problems=problems)
1129 1129 fm.condwrite(problems, 'problems',
1130 1130 _("%d problems detected,"
1131 1131 " please check your install!\n"), problems)
1132 1132 fm.end()
1133 1133
1134 1134 return problems
1135 1135
1136 1136 @command('debugknown', [], _('REPO ID...'), norepo=True)
1137 1137 def debugknown(ui, repopath, *ids, **opts):
1138 1138 """test whether node ids are known to a repo
1139 1139
1140 1140 Every ID must be a full-length hex node id string. Returns a list of 0s
1141 1141 and 1s indicating unknown/known.
1142 1142 """
1143 1143 opts = pycompat.byteskwargs(opts)
1144 1144 repo = hg.peer(ui, opts, repopath)
1145 1145 if not repo.capable('known'):
1146 1146 raise error.Abort("known() not supported by target repository")
1147 1147 flags = repo.known([bin(s) for s in ids])
1148 1148 ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))
1149 1149
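A hedged sketch of the same 'known' query made directly against a peer,
mirroring the command above (the URL and node id are hypothetical)::

    from mercurial import hg
    from mercurial.node import bin

    peer = hg.peer(ui, {}, 'http://example.com/repo')
    flags = peer.known([bin('a' * 40)])
    ui.write("%s\n" % "".join(f and "1" or "0" for f in flags))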
1150 1150 @command('debuglabelcomplete', [], _('LABEL...'))
1151 1151 def debuglabelcomplete(ui, repo, *args):
1152 1152 '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
1153 1153 debugnamecomplete(ui, repo, *args)
1154 1154
1155 1155 @command('debuglocks',
1156 1156 [('L', 'force-lock', None, _('free the store lock (DANGEROUS)')),
1157 1157 ('W', 'force-wlock', None,
1158 1158 _('free the working state lock (DANGEROUS)'))],
1159 1159 _('[OPTION]...'))
1160 1160 def debuglocks(ui, repo, **opts):
1161 1161 """show or modify state of locks
1162 1162
1163 1163 By default, this command will show which locks are held. This
1164 1164 includes the user and process holding the lock, the amount of time
1165 1165 the lock has been held, and the machine name where the process is
1166 1166 running if it's not local.
1167 1167
1168 1168 Locks protect the integrity of Mercurial's data, so they should be
1169 1169 treated with care. System crashes or other interruptions may cause
1170 1170 locks to not be properly released, though Mercurial will usually
1171 1171 detect and remove such stale locks automatically.
1172 1172
1173 1173 However, detecting stale locks may not always be possible (for
1174 1174 instance, on a shared filesystem). Removing locks may also be
1175 1175 blocked by filesystem permissions.
1176 1176
1177 1177 Returns 0 if no locks are held.
1178 1178
1179 1179 """
1180 1180
1181 1181 if opts.get(r'force_lock'):
1182 1182 repo.svfs.unlink('lock')
1183 1183 if opts.get(r'force_wlock'):
1184 1184 repo.vfs.unlink('wlock')
1185 1185 if opts.get(r'force_lock') or opts.get(r'force_wlock'):
1186 1186 return 0
1187 1187
1188 1188 now = time.time()
1189 1189 held = 0
1190 1190
1191 1191 def report(vfs, name, method):
1192 1192 # this causes stale locks to get reaped for more accurate reporting
1193 1193 try:
1194 1194 l = method(False)
1195 1195 except error.LockHeld:
1196 1196 l = None
1197 1197
1198 1198 if l:
1199 1199 l.release()
1200 1200 else:
1201 1201 try:
1202 1202 stat = vfs.lstat(name)
1203 1203 age = now - stat.st_mtime
1204 1204 user = util.username(stat.st_uid)
1205 1205 locker = vfs.readlock(name)
1206 1206 if ":" in locker:
1207 1207 host, pid = locker.split(':')
1208 1208 if host == socket.gethostname():
1209 1209 locker = 'user %s, process %s' % (user, pid)
1210 1210 else:
1211 1211 locker = 'user %s, process %s, host %s' \
1212 1212 % (user, pid, host)
1213 1213 ui.write(("%-6s %s (%ds)\n") % (name + ":", locker, age))
1214 1214 return 1
1215 1215 except OSError as e:
1216 1216 if e.errno != errno.ENOENT:
1217 1217 raise
1218 1218
1219 1219 ui.write(("%-6s free\n") % (name + ":"))
1220 1220 return 0
1221 1221
1222 1222 held += report(repo.svfs, "lock", repo.lock)
1223 1223 held += report(repo.vfs, "wlock", repo.wlock)
1224 1224
1225 1225 return held
1226 1226
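As a usage note, the lock-taking pattern these reports correspond to is the
one used elsewhere in this file: take the working state lock before the
store lock. A minimal sketch, assuming ``repo`` is a localrepository::

    with repo.wlock(), repo.lock():
        pass  # both the store and the working copy may be mutated here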
1227 1227 @command('debugmergestate', [], '')
1228 1228 def debugmergestate(ui, repo, *args):
1229 1229 """print merge state
1230 1230
1231 1231 Use --verbose to print out information about whether v1 or v2 merge state
1232 1232 was chosen."""
1233 1233 def _hashornull(h):
1234 1234 if h == nullhex:
1235 1235 return 'null'
1236 1236 else:
1237 1237 return h
1238 1238
1239 1239 def printrecords(version):
1240 1240 ui.write(('* version %s records\n') % version)
1241 1241 if version == 1:
1242 1242 records = v1records
1243 1243 else:
1244 1244 records = v2records
1245 1245
1246 1246 for rtype, record in records:
1247 1247 # pretty print some record types
1248 1248 if rtype == 'L':
1249 1249 ui.write(('local: %s\n') % record)
1250 1250 elif rtype == 'O':
1251 1251 ui.write(('other: %s\n') % record)
1252 1252 elif rtype == 'm':
1253 1253 driver, mdstate = record.split('\0', 1)
1254 1254 ui.write(('merge driver: %s (state "%s")\n')
1255 1255 % (driver, mdstate))
1256 1256 elif rtype in 'FDC':
1257 1257 r = record.split('\0')
1258 1258 f, state, hash, lfile, afile, anode, ofile = r[0:7]
1259 1259 if version == 1:
1260 1260 onode = 'not stored in v1 format'
1261 1261 flags = r[7]
1262 1262 else:
1263 1263 onode, flags = r[7:9]
1264 1264 ui.write(('file: %s (record type "%s", state "%s", hash %s)\n')
1265 1265 % (f, rtype, state, _hashornull(hash)))
1266 1266 ui.write((' local path: %s (flags "%s")\n') % (lfile, flags))
1267 1267 ui.write((' ancestor path: %s (node %s)\n')
1268 1268 % (afile, _hashornull(anode)))
1269 1269 ui.write((' other path: %s (node %s)\n')
1270 1270 % (ofile, _hashornull(onode)))
1271 1271 elif rtype == 'f':
1272 1272 filename, rawextras = record.split('\0', 1)
1273 1273 extras = rawextras.split('\0')
1274 1274 i = 0
1275 1275 extrastrings = []
1276 1276 while i < len(extras):
1277 1277 extrastrings.append('%s = %s' % (extras[i], extras[i + 1]))
1278 1278 i += 2
1279 1279
1280 1280 ui.write(('file extras: %s (%s)\n')
1281 1281 % (filename, ', '.join(extrastrings)))
1282 1282 elif rtype == 'l':
1283 1283 labels = record.split('\0', 2)
1284 1284 labels = [l for l in labels if len(l) > 0]
1285 1285 ui.write(('labels:\n'))
1286 1286 ui.write((' local: %s\n' % labels[0]))
1287 1287 ui.write((' other: %s\n' % labels[1]))
1288 1288 if len(labels) > 2:
1289 1289 ui.write((' base: %s\n' % labels[2]))
1290 1290 else:
1291 1291 ui.write(('unrecognized entry: %s\t%s\n')
1292 1292 % (rtype, record.replace('\0', '\t')))
1293 1293
1294 1294 # Avoid mergestate.read() since it may raise an exception for unsupported
1295 1295 # merge state records. We shouldn't be doing this, but this is OK since this
1296 1296 # command is pretty low-level.
1297 1297 ms = mergemod.mergestate(repo)
1298 1298
1299 1299 # sort so that reasonable information is on top
1300 1300 v1records = ms._readrecordsv1()
1301 1301 v2records = ms._readrecordsv2()
1302 1302 order = 'LOml'
1303 1303 def key(r):
1304 1304 idx = order.find(r[0])
1305 1305 if idx == -1:
1306 1306 return (1, r[1])
1307 1307 else:
1308 1308 return (0, idx)
1309 1309 v1records.sort(key=key)
1310 1310 v2records.sort(key=key)
1311 1311
1312 1312 if not v1records and not v2records:
1313 1313 ui.write(('no merge state found\n'))
1314 1314 elif not v2records:
1315 1315 ui.note(('no version 2 merge state\n'))
1316 1316 printrecords(1)
1317 1317 elif ms._v1v2match(v1records, v2records):
1318 1318 ui.note(('v1 and v2 states match: using v2\n'))
1319 1319 printrecords(2)
1320 1320 else:
1321 1321 ui.note(('v1 and v2 states mismatch: using v1\n'))
1322 1322 printrecords(1)
1323 1323 if ui.verbose:
1324 1324 printrecords(2)
1325 1325
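To make the 'F'/'D'/'C' record layout above concrete, here is a hedged
sketch decoding one NUL-separated v2 file record (all field values are
hypothetical)::

    record = '\0'.join(['a.txt', 'u', '0' * 40, 'a.txt', 'a.txt',
                        '1' * 40, 'a.txt', '2' * 40, ''])
    r = record.split('\0')
    f, state, hash, lfile, afile, anode, ofile = r[0:7]
    onode, flags = r[7:9]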
1326 1326 @command('debugnamecomplete', [], _('NAME...'))
1327 1327 def debugnamecomplete(ui, repo, *args):
1328 1328 '''complete "names" - tags, open branch names, bookmark names'''
1329 1329
1330 1330 names = set()
1331 1331 # since we previously only listed open branches, we will handle that
1332 1332 # specially (after this for loop)
1333 1333 for name, ns in repo.names.iteritems():
1334 1334 if name != 'branches':
1335 1335 names.update(ns.listnames(repo))
1336 1336 names.update(tag for (tag, heads, tip, closed)
1337 1337 in repo.branchmap().iterbranches() if not closed)
1338 1338 completions = set()
1339 1339 if not args:
1340 1340 args = ['']
1341 1341 for a in args:
1342 1342 completions.update(n for n in names if n.startswith(a))
1343 1343 ui.write('\n'.join(sorted(completions)))
1344 1344 ui.write('\n')
1345 1345
1346 1346 @command('debugobsolete',
1347 1347 [('', 'flags', 0, _('markers flag')),
1348 1348 ('', 'record-parents', False,
1349 1349 _('record parent information for the precursor')),
1350 1350 ('r', 'rev', [], _('display markers relevant to REV')),
1351 1351 ('', 'exclusive', False, _('restrict display to markers only '
1352 1352 'relevant to REV')),
1353 1353 ('', 'index', False, _('display index of the marker')),
1354 1354 ('', 'delete', [], _('delete markers specified by indices')),
1355 1355 ] + cmdutil.commitopts2 + cmdutil.formatteropts,
1356 1356 _('[OBSOLETED [REPLACEMENT ...]]'))
1357 1357 def debugobsolete(ui, repo, precursor=None, *successors, **opts):
1358 1358 """create arbitrary obsolete marker
1359 1359
1360 1360 With no arguments, displays the list of obsolescence markers."""
1361 1361
1362 1362 opts = pycompat.byteskwargs(opts)
1363 1363
1364 1364 def parsenodeid(s):
1365 1365 try:
1366 1366 # We do not use revsingle/revrange functions here to accept
1367 1367 # arbitrary node identifiers, possibly not present in the
1368 1368 # local repository.
1369 1369 n = bin(s)
1370 1370 if len(n) != len(nullid):
1371 1371 raise TypeError()
1372 1372 return n
1373 1373 except TypeError:
1374 1374 raise error.Abort('changeset references must be full hexadecimal '
1375 1375 'node identifiers')
1376 1376
1377 1377 if opts.get('delete'):
1378 1378 indices = []
1379 1379 for v in opts.get('delete'):
1380 1380 try:
1381 1381 indices.append(int(v))
1382 1382 except ValueError:
1383 1383 raise error.Abort(_('invalid index value: %r') % v,
1384 1384 hint=_('use integers for indices'))
1385 1385
1386 1386 if repo.currenttransaction():
1387 1387 raise error.Abort(_('cannot delete obsmarkers in the middle '
1388 1388 'of a transaction.'))
1389 1389
1390 1390 with repo.lock():
1391 1391 n = repair.deleteobsmarkers(repo.obsstore, indices)
1392 1392 ui.write(_('deleted %i obsolescence markers\n') % n)
1393 1393
1394 1394 return
1395 1395
1396 1396 if precursor is not None:
1397 1397 if opts['rev']:
1398 1398 raise error.Abort('cannot select revision when creating marker')
1399 1399 metadata = {}
1400 1400 metadata['user'] = opts['user'] or ui.username()
1401 1401 succs = tuple(parsenodeid(succ) for succ in successors)
1402 1402 l = repo.lock()
1403 1403 try:
1404 1404 tr = repo.transaction('debugobsolete')
1405 1405 try:
1406 1406 date = opts.get('date')
1407 1407 if date:
1408 1408 date = util.parsedate(date)
1409 1409 else:
1410 1410 date = None
1411 1411 prec = parsenodeid(precursor)
1412 1412 parents = None
1413 1413 if opts['record_parents']:
1414 1414 if prec not in repo.unfiltered():
1415 1415 raise error.Abort('cannot use --record-parents on '
1416 1416 'unknown changesets')
1417 1417 parents = repo.unfiltered()[prec].parents()
1418 1418 parents = tuple(p.node() for p in parents)
1419 1419 repo.obsstore.create(tr, prec, succs, opts['flags'],
1420 1420 parents=parents, date=date,
1421 1421 metadata=metadata, ui=ui)
1422 1422 tr.close()
1423 1423 except ValueError as exc:
1424 1424 raise error.Abort(_('bad obsmarker input: %s') % exc)
1425 1425 finally:
1426 1426 tr.release()
1427 1427 finally:
1428 1428 l.release()
1429 1429 else:
1430 1430 if opts['rev']:
1431 1431 revs = scmutil.revrange(repo, opts['rev'])
1432 1432 nodes = [repo[r].node() for r in revs]
1433 1433 markers = list(obsutil.getmarkers(repo, nodes=nodes,
1434 1434 exclusive=opts['exclusive']))
1435 1435 markers.sort(key=lambda x: x._data)
1436 1436 else:
1437 1437 markers = obsutil.getmarkers(repo)
1438 1438
1439 1439 markerstoiter = markers
1440 1440 isrelevant = lambda m: True
1441 1441 if opts.get('rev') and opts.get('index'):
1442 1442 markerstoiter = obsutil.getmarkers(repo)
1443 1443 markerset = set(markers)
1444 1444 isrelevant = lambda m: m in markerset
1445 1445
1446 1446 fm = ui.formatter('debugobsolete', opts)
1447 1447 for i, m in enumerate(markerstoiter):
1448 1448 if not isrelevant(m):
1449 1449 # marker can be irrelevant when we're iterating over a set
1450 1450 # of markers (markerstoiter) which is bigger than the set
1451 1451 # of markers we want to display (markers)
1452 1452 # this can happen if both --index and --rev options are
1453 1453 # provided and thus we need to iterate over all of the markers
1454 1454 # to get the correct indices, but only display the ones that
1455 1455 # are relevant to --rev value
1456 1456 continue
1457 1457 fm.startitem()
1458 1458 ind = i if opts.get('index') else None
1459 1459 cmdutil.showmarker(fm, m, index=ind)
1460 1460 fm.end()
1461 1461
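A minimal sketch of the marker-creation path above, assuming ``repo`` is an
existing repository and ``prec``/``succ`` are valid 20-byte node ids (both
hypothetical here)::

    with repo.lock():
        tr = repo.transaction('example')
        try:
            repo.obsstore.create(tr, prec, (succ,), 0,
                                 metadata={'user': 'alice'})
            tr.close()
        finally:
            tr.release()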
1462 1462 @command('debugpathcomplete',
1463 1463 [('f', 'full', None, _('complete an entire path')),
1464 1464 ('n', 'normal', None, _('show only normal files')),
1465 1465 ('a', 'added', None, _('show only added files')),
1466 1466 ('r', 'removed', None, _('show only removed files'))],
1467 1467 _('FILESPEC...'))
1468 1468 def debugpathcomplete(ui, repo, *specs, **opts):
1469 1469 '''complete part or all of a tracked path
1470 1470
1471 1471 This command supports shells that offer path name completion. It
1472 1472 currently completes only files already known to the dirstate.
1473 1473
1474 1474 Completion extends only to the next path segment unless
1475 1475 --full is specified, in which case entire paths are used.'''
1476 1476
1477 1477 def complete(path, acceptable):
1478 1478 dirstate = repo.dirstate
1479 1479 spec = os.path.normpath(os.path.join(pycompat.getcwd(), path))
1480 1480 rootdir = repo.root + pycompat.ossep
1481 1481 if spec != repo.root and not spec.startswith(rootdir):
1482 1482 return [], []
1483 1483 if os.path.isdir(spec):
1484 1484 spec += '/'
1485 1485 spec = spec[len(rootdir):]
1486 1486 fixpaths = pycompat.ossep != '/'
1487 1487 if fixpaths:
1488 1488 spec = spec.replace(pycompat.ossep, '/')
1489 1489 speclen = len(spec)
1490 1490 fullpaths = opts[r'full']
1491 1491 files, dirs = set(), set()
1492 1492 adddir, addfile = dirs.add, files.add
1493 1493 for f, st in dirstate.iteritems():
1494 1494 if f.startswith(spec) and st[0] in acceptable:
1495 1495 if fixpaths:
1496 1496 f = f.replace('/', pycompat.ossep)
1497 1497 if fullpaths:
1498 1498 addfile(f)
1499 1499 continue
1500 1500 s = f.find(pycompat.ossep, speclen)
1501 1501 if s >= 0:
1502 1502 adddir(f[:s])
1503 1503 else:
1504 1504 addfile(f)
1505 1505 return files, dirs
1506 1506
1507 1507 acceptable = ''
1508 1508 if opts[r'normal']:
1509 1509 acceptable += 'nm'
1510 1510 if opts[r'added']:
1511 1511 acceptable += 'a'
1512 1512 if opts[r'removed']:
1513 1513 acceptable += 'r'
1514 1514 cwd = repo.getcwd()
1515 1515 if not specs:
1516 1516 specs = ['.']
1517 1517
1518 1518 files, dirs = set(), set()
1519 1519 for spec in specs:
1520 1520 f, d = complete(spec, acceptable or 'nmar')
1521 1521 files.update(f)
1522 1522 dirs.update(d)
1523 1523 files.update(dirs)
1524 1524 ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
1525 1525 ui.write('\n')
1526 1526
1527 1527 @command('debugpickmergetool',
1528 1528 [('r', 'rev', '', _('check for files in this revision'), _('REV')),
1529 1529 ('', 'changedelete', None, _('emulate merging change and delete')),
1530 1530 ] + cmdutil.walkopts + cmdutil.mergetoolopts,
1531 1531 _('[PATTERN]...'),
1532 1532 inferrepo=True)
1533 1533 def debugpickmergetool(ui, repo, *pats, **opts):
1534 1534 """examine which merge tool is chosen for specified file
1535 1535
1536 1536 As described in :hg:`help merge-tools`, Mercurial examines
1537 1537 the configurations below in this order to decide which merge tool is
1538 1538 chosen for the specified file.
1539 1539
1540 1540 1. ``--tool`` option
1541 1541 2. ``HGMERGE`` environment variable
1542 1542 3. configurations in ``merge-patterns`` section
1543 1543 4. configuration of ``ui.merge``
1544 1544 5. configurations in ``merge-tools`` section
1545 1545 6. ``hgmerge`` tool (for historical reasons only)
1546 1546 7. default tool for fallback (``:merge`` or ``:prompt``)
1547 1547
1548 1548 This command writes out the examination result in the style below::
1549 1549
1550 1550 FILE = MERGETOOL
1551 1551
1552 1552 By default, all files known in the first parent context of the
1553 1553 working directory are examined. Use file patterns and/or -I/-X
1554 1554 options to limit target files. -r/--rev is also useful to examine
1555 1555 files in another context without actually updating to it.
1556 1556
1557 1557 With --debug, this command shows warning messages while matching
1558 1558 against ``merge-patterns`` and so on. It is recommended to
1559 1559 use this option with explicit file patterns and/or -I/-X options,
1560 1560 because this option increases the amount of output per file according
1561 1561 to the configuration in hgrc.
1562 1562
1563 1563 With -v/--verbose, this command first shows the configurations
1564 1564 below (only if specified).
1565 1565
1566 1566 - ``--tool`` option
1567 1567 - ``HGMERGE`` environment variable
1568 1568 - configuration of ``ui.merge``
1569 1569
1570 1570 If a merge tool is chosen before matching against
1571 1571 ``merge-patterns``, this command can't show any helpful
1572 1572 information, even with --debug. In such a case, the information
1573 1573 above is useful for knowing why a merge tool was chosen.
1574 1574 """
1575 1575 opts = pycompat.byteskwargs(opts)
1576 1576 overrides = {}
1577 1577 if opts['tool']:
1578 1578 overrides[('ui', 'forcemerge')] = opts['tool']
1579 1579 ui.note(('with --tool %r\n') % (opts['tool']))
1580 1580
1581 1581 with ui.configoverride(overrides, 'debugmergepatterns'):
1582 1582 hgmerge = encoding.environ.get("HGMERGE")
1583 1583 if hgmerge is not None:
1584 1584 ui.note(('with HGMERGE=%r\n') % (hgmerge))
1585 1585 uimerge = ui.config("ui", "merge")
1586 1586 if uimerge:
1587 1587 ui.note(('with ui.merge=%r\n') % (uimerge))
1588 1588
1589 1589 ctx = scmutil.revsingle(repo, opts.get('rev'))
1590 1590 m = scmutil.match(ctx, pats, opts)
1591 1591 changedelete = opts['changedelete']
1592 1592 for path in ctx.walk(m):
1593 1593 fctx = ctx[path]
1594 1594 try:
1595 1595 if not ui.debugflag:
1596 1596 ui.pushbuffer(error=True)
1597 1597 tool, toolpath = filemerge._picktool(repo, ui, path,
1598 1598 fctx.isbinary(),
1599 1599 'l' in fctx.flags(),
1600 1600 changedelete)
1601 1601 finally:
1602 1602 if not ui.debugflag:
1603 1603 ui.popbuffer()
1604 1604 ui.write(('%s = %s\n') % (path, tool))
1605 1605
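To illustrate precedence rules 3 and 4 above, a hypothetical hgrc where the
pattern rule wins over ``ui.merge`` for C files::

    [merge-patterns]
    **.c = kdiff3

    [ui]
    merge = internal:merge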
1606 1606 @command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
1607 1607 def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
1608 1608 '''access the pushkey key/value protocol
1609 1609
1610 1610 With two args, list the keys in the given namespace.
1611 1611
1612 1612 With five args, set a key to new if it currently is set to old.
1613 1613 Reports success or failure.
1614 1614 '''
1615 1615
1616 1616 target = hg.peer(ui, {}, repopath)
1617 1617 if keyinfo:
1618 1618 key, old, new = keyinfo
1619 1619 r = target.pushkey(namespace, key, old, new)
1620 1620 ui.status(str(r) + '\n')
1621 1621 return not r
1622 1622 else:
1623 1623 for k, v in sorted(target.listkeys(namespace).iteritems()):
1624 1624 ui.write("%s\t%s\n" % (util.escapestr(k),
1625 1625 util.escapestr(v)))
1626 1626
1627 1627 @command('debugpvec', [], _('A B'))
1628 1628 def debugpvec(ui, repo, a, b=None):
1629 1629 ca = scmutil.revsingle(repo, a)
1630 1630 cb = scmutil.revsingle(repo, b)
1631 1631 pa = pvec.ctxpvec(ca)
1632 1632 pb = pvec.ctxpvec(cb)
1633 1633 if pa == pb:
1634 1634 rel = "="
1635 1635 elif pa > pb:
1636 1636 rel = ">"
1637 1637 elif pa < pb:
1638 1638 rel = "<"
1639 1639 elif pa | pb:
1640 1640 rel = "|"
1641 1641 ui.write(_("a: %s\n") % pa)
1642 1642 ui.write(_("b: %s\n") % pb)
1643 1643 ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
1644 1644 ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
1645 1645 (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
1646 1646 pa.distance(pb), rel))
1647 1647
1648 1648 @command('debugrebuilddirstate|debugrebuildstate',
1649 1649 [('r', 'rev', '', _('revision to rebuild to'), _('REV')),
1650 1650 ('', 'minimal', None, _('only rebuild files that are inconsistent with '
1651 1651 'the working copy parent')),
1652 1652 ],
1653 1653 _('[-r REV]'))
1654 1654 def debugrebuilddirstate(ui, repo, rev, **opts):
1655 1655 """rebuild the dirstate as it would look like for the given revision
1656 1656
1657 1657 If no revision is specified, the working directory's first parent will be used.
1658 1658
1659 1659 The dirstate will be set to the files of the given revision.
1660 1660 The actual working directory content or existing dirstate
1661 1661 information such as adds or removes is not considered.
1662 1662
1663 1663 ``minimal`` will only rebuild the dirstate status for files that claim to be
1664 1664 tracked but are not in the parent manifest, or that exist in the parent
1665 1665 manifest but are not in the dirstate. It will not change adds, removes, or
1666 1666 modified files that are in the working copy parent.
1667 1667
1668 1668 One use of this command is to make the next :hg:`status` invocation
1669 1669 check the actual file content.
1670 1670 """
1671 1671 ctx = scmutil.revsingle(repo, rev)
1672 1672 with repo.wlock():
1673 1673 dirstate = repo.dirstate
1674 1674 changedfiles = None
1675 1675 # See command doc for what minimal does.
1676 1676 if opts.get(r'minimal'):
1677 1677 manifestfiles = set(ctx.manifest().keys())
1678 1678 dirstatefiles = set(dirstate)
1679 1679 manifestonly = manifestfiles - dirstatefiles
1680 1680 dsonly = dirstatefiles - manifestfiles
1681 1681 dsnotadded = set(f for f in dsonly if dirstate[f] != 'a')
1682 1682 changedfiles = manifestonly | dsnotadded
1683 1683
1684 1684 dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)
1685 1685
1686 1686 @command('debugrebuildfncache', [], '')
1687 1687 def debugrebuildfncache(ui, repo):
1688 1688 """rebuild the fncache file"""
1689 1689 repair.rebuildfncache(ui, repo)
1690 1690
1691 1691 @command('debugrename',
1692 1692 [('r', 'rev', '', _('revision to debug'), _('REV'))],
1693 1693 _('[-r REV] FILE'))
1694 1694 def debugrename(ui, repo, file1, *pats, **opts):
1695 1695 """dump rename information"""
1696 1696
1697 1697 opts = pycompat.byteskwargs(opts)
1698 1698 ctx = scmutil.revsingle(repo, opts.get('rev'))
1699 1699 m = scmutil.match(ctx, (file1,) + pats, opts)
1700 1700 for abs in ctx.walk(m):
1701 1701 fctx = ctx[abs]
1702 1702 o = fctx.filelog().renamed(fctx.filenode())
1703 1703 rel = m.rel(abs)
1704 1704 if o:
1705 1705 ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
1706 1706 else:
1707 1707 ui.write(_("%s not renamed\n") % rel)
1708 1708
1709 1709 @command('debugrevlog', cmdutil.debugrevlogopts +
1710 1710 [('d', 'dump', False, _('dump index data'))],
1711 1711 _('-c|-m|FILE'),
1712 1712 optionalrepo=True)
1713 1713 def debugrevlog(ui, repo, file_=None, **opts):
1714 1714 """show data and statistics about a revlog"""
1715 1715 opts = pycompat.byteskwargs(opts)
1716 1716 r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)
1717 1717
1718 1718 if opts.get("dump"):
1719 1719 numrevs = len(r)
1720 1720 ui.write(("# rev p1rev p2rev start end deltastart base p1 p2"
1721 1721 " rawsize totalsize compression heads chainlen\n"))
1722 1722 ts = 0
1723 1723 heads = set()
1724 1724
1725 1725 for rev in xrange(numrevs):
1726 1726 dbase = r.deltaparent(rev)
1727 1727 if dbase == -1:
1728 1728 dbase = rev
1729 1729 cbase = r.chainbase(rev)
1730 1730 clen = r.chainlen(rev)
1731 1731 p1, p2 = r.parentrevs(rev)
1732 1732 rs = r.rawsize(rev)
1733 1733 ts = ts + rs
1734 1734 heads -= set(r.parentrevs(rev))
1735 1735 heads.add(rev)
1736 1736 try:
1737 1737 compression = ts / r.end(rev)
1738 1738 except ZeroDivisionError:
1739 1739 compression = 0
1740 1740 ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
1741 1741 "%11d %5d %8d\n" %
1742 1742 (rev, p1, p2, r.start(rev), r.end(rev),
1743 1743 r.start(dbase), r.start(cbase),
1744 1744 r.start(p1), r.start(p2),
1745 1745 rs, ts, compression, len(heads), clen))
1746 1746 return 0
1747 1747
1748 1748 v = r.version
1749 1749 format = v & 0xFFFF
1750 1750 flags = []
1751 1751 gdelta = False
1752 1752 if v & revlog.FLAG_INLINE_DATA:
1753 1753 flags.append('inline')
1754 1754 if v & revlog.FLAG_GENERALDELTA:
1755 1755 gdelta = True
1756 1756 flags.append('generaldelta')
1757 1757 if not flags:
1758 1758 flags = ['(none)']
1759 1759
1760 1760 nummerges = 0
1761 1761 numfull = 0
1762 1762 numprev = 0
1763 1763 nump1 = 0
1764 1764 nump2 = 0
1765 1765 numother = 0
1766 1766 nump1prev = 0
1767 1767 nump2prev = 0
1768 1768 chainlengths = []
1769 1769 chainbases = []
1770 1770 chainspans = []
1771 1771
1772 1772 datasize = [None, 0, 0]
1773 1773 fullsize = [None, 0, 0]
1774 1774 deltasize = [None, 0, 0]
1775 1775 chunktypecounts = {}
1776 1776 chunktypesizes = {}
1777 1777
1778 1778 def addsize(size, l):
1779 1779 if l[0] is None or size < l[0]:
1780 1780 l[0] = size
1781 1781 if size > l[1]:
1782 1782 l[1] = size
1783 1783 l[2] += size
1784 1784
1785 1785 numrevs = len(r)
1786 1786 for rev in xrange(numrevs):
1787 1787 p1, p2 = r.parentrevs(rev)
1788 1788 delta = r.deltaparent(rev)
1789 1789 if format > 0:
1790 1790 addsize(r.rawsize(rev), datasize)
1791 1791 if p2 != nullrev:
1792 1792 nummerges += 1
1793 1793 size = r.length(rev)
1794 1794 if delta == nullrev:
1795 1795 chainlengths.append(0)
1796 1796 chainbases.append(r.start(rev))
1797 1797 chainspans.append(size)
1798 1798 numfull += 1
1799 1799 addsize(size, fullsize)
1800 1800 else:
1801 1801 chainlengths.append(chainlengths[delta] + 1)
1802 1802 baseaddr = chainbases[delta]
1803 1803 revaddr = r.start(rev)
1804 1804 chainbases.append(baseaddr)
1805 1805 chainspans.append((revaddr - baseaddr) + size)
1806 1806 addsize(size, deltasize)
1807 1807 if delta == rev - 1:
1808 1808 numprev += 1
1809 1809 if delta == p1:
1810 1810 nump1prev += 1
1811 1811 elif delta == p2:
1812 1812 nump2prev += 1
1813 1813 elif delta == p1:
1814 1814 nump1 += 1
1815 1815 elif delta == p2:
1816 1816 nump2 += 1
1817 1817 elif delta != nullrev:
1818 1818 numother += 1
1819 1819
1820 1820 # Obtain data on the raw chunks in the revlog.
1821 1821 segment = r._getsegmentforrevs(rev, rev)[1]
1822 1822 if segment:
1823 1823 chunktype = bytes(segment[0:1])
1824 1824 else:
1825 1825 chunktype = 'empty'
1826 1826
1827 1827 if chunktype not in chunktypecounts:
1828 1828 chunktypecounts[chunktype] = 0
1829 1829 chunktypesizes[chunktype] = 0
1830 1830
1831 1831 chunktypecounts[chunktype] += 1
1832 1832 chunktypesizes[chunktype] += size
1833 1833
1834 1834 # Adjust size min value for empty cases
1835 1835 for size in (datasize, fullsize, deltasize):
1836 1836 if size[0] is None:
1837 1837 size[0] = 0
1838 1838
1839 1839 numdeltas = numrevs - numfull
1840 1840 numoprev = numprev - nump1prev - nump2prev
1841 1841 totalrawsize = datasize[2]
1842 1842 datasize[2] /= numrevs
1843 1843 fulltotal = fullsize[2]
1844 1844 fullsize[2] /= numfull
1845 1845 deltatotal = deltasize[2]
1846 1846 if numrevs - numfull > 0:
1847 1847 deltasize[2] /= numrevs - numfull
1848 1848 totalsize = fulltotal + deltatotal
1849 1849 avgchainlen = sum(chainlengths) / numrevs
1850 1850 maxchainlen = max(chainlengths)
1851 1851 maxchainspan = max(chainspans)
1852 1852 compratio = 1
1853 1853 if totalsize:
1854 1854 compratio = totalrawsize / totalsize
1855 1855
1856 1856 basedfmtstr = '%%%dd\n'
1857 1857 basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'
1858 1858
1859 1859 def dfmtstr(max):
1860 1860 return basedfmtstr % len(str(max))
1861 1861 def pcfmtstr(max, padding=0):
1862 1862 return basepcfmtstr % (len(str(max)), ' ' * padding)
1863 1863
1864 1864 def pcfmt(value, total):
1865 1865 if total:
1866 1866 return (value, 100 * float(value) / total)
1867 1867 else:
1868 1868 return value, 100.0
1869 1869
1870 1870 ui.write(('format : %d\n') % format)
1871 1871 ui.write(('flags : %s\n') % ', '.join(flags))
1872 1872
1873 1873 ui.write('\n')
1874 1874 fmt = pcfmtstr(totalsize)
1875 1875 fmt2 = dfmtstr(totalsize)
1876 1876 ui.write(('revisions : ') + fmt2 % numrevs)
1877 1877 ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
1878 1878 ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
1879 1879 ui.write(('revisions : ') + fmt2 % numrevs)
1880 1880 ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
1881 1881 ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
1882 1882 ui.write(('revision size : ') + fmt2 % totalsize)
1883 1883 ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
1884 1884 ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))
1885 1885
1886 1886 def fmtchunktype(chunktype):
1887 1887 if chunktype == 'empty':
1888 1888 return ' %s : ' % chunktype
1889 1889 elif chunktype in pycompat.bytestr(string.ascii_letters):
1890 1890 return ' 0x%s (%s) : ' % (hex(chunktype), chunktype)
1891 1891 else:
1892 1892 return ' 0x%s : ' % hex(chunktype)
1893 1893
1894 1894 ui.write('\n')
1895 1895 ui.write(('chunks : ') + fmt2 % numrevs)
1896 1896 for chunktype in sorted(chunktypecounts):
1897 1897 ui.write(fmtchunktype(chunktype))
1898 1898 ui.write(fmt % pcfmt(chunktypecounts[chunktype], numrevs))
1899 1899 ui.write(('chunks size : ') + fmt2 % totalsize)
1900 1900 for chunktype in sorted(chunktypecounts):
1901 1901 ui.write(fmtchunktype(chunktype))
1902 1902 ui.write(fmt % pcfmt(chunktypesizes[chunktype], totalsize))
1903 1903
1904 1904 ui.write('\n')
1905 1905 fmt = dfmtstr(max(avgchainlen, maxchainlen, maxchainspan, compratio))
1906 1906 ui.write(('avg chain length : ') + fmt % avgchainlen)
1907 1907 ui.write(('max chain length : ') + fmt % maxchainlen)
1908 1908 ui.write(('max chain reach : ') + fmt % maxchainspan)
1909 1909 ui.write(('compression ratio : ') + fmt % compratio)
1910 1910
1911 1911 if format > 0:
1912 1912 ui.write('\n')
1913 1913 ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
1914 1914 % tuple(datasize))
1915 1915 ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
1916 1916 % tuple(fullsize))
1917 1917 ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
1918 1918 % tuple(deltasize))
1919 1919
1920 1920 if numdeltas > 0:
1921 1921 ui.write('\n')
1922 1922 fmt = pcfmtstr(numdeltas)
1923 1923 fmt2 = pcfmtstr(numdeltas, 4)
1924 1924 ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
1925 1925 if numprev > 0:
1926 1926 ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
1927 1927 numprev))
1928 1928 ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
1929 1929 numprev))
1930 1930 ui.write((' other : ') + fmt2 % pcfmt(numoprev,
1931 1931 numprev))
1932 1932 if gdelta:
1933 1933 ui.write(('deltas against p1 : ')
1934 1934 + fmt % pcfmt(nump1, numdeltas))
1935 1935 ui.write(('deltas against p2 : ')
1936 1936 + fmt % pcfmt(nump2, numdeltas))
1937 1937 ui.write(('deltas against other : ') + fmt % pcfmt(numother,
1938 1938 numdeltas))
1939 1939
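The chain statistics above are computed from the public revlog API; a hedged
sketch querying the same numbers for a single revision, assuming ``r`` is an
open revlog::

    rev = len(r) - 1
    ui.write('chain length: %d\n' % r.chainlen(rev))
    ui.write('delta parent: %d\n' % r.deltaparent(rev))  # -1 = full text
    ui.write('chain base:   %d\n' % r.chainbase(rev))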
1940 1940 @command('debugrevspec',
1941 1941 [('', 'optimize', None,
1942 1942 _('print parsed tree after optimizing (DEPRECATED)')),
1943 1943 ('', 'show-revs', True, _('print list of result revisions (default)')),
1944 1944 ('s', 'show-set', None, _('print internal representation of result set')),
1945 1945 ('p', 'show-stage', [],
1946 1946 _('print parsed tree at the given stage'), _('NAME')),
1947 1947 ('', 'no-optimized', False, _('evaluate tree without optimization')),
1948 1948 ('', 'verify-optimized', False, _('verify optimized result')),
1949 1949 ],
1950 1950 ('REVSPEC'))
1951 1951 def debugrevspec(ui, repo, expr, **opts):
1952 1952 """parse and apply a revision specification
1953 1953
1954 1954 Use -p/--show-stage option to print the parsed tree at the given stages.
1955 1955 Use -p all to print the tree at every stage.
1956 1956
1957 1957 Use --no-show-revs option with -s or -p to print only the set
1958 1958 representation or the parsed tree respectively.
1959 1959
1960 1960 Use --verify-optimized to compare the optimized result with the unoptimized
1961 1961 one. Returns 1 if the optimized result differs.
1962 1962 """
1963 1963 opts = pycompat.byteskwargs(opts)
1964 1964 aliases = ui.configitems('revsetalias')
1965 1965 stages = [
1966 1966 ('parsed', lambda tree: tree),
1967 1967 ('expanded', lambda tree: revsetlang.expandaliases(tree, aliases,
1968 1968 ui.warn)),
1969 1969 ('concatenated', revsetlang.foldconcat),
1970 1970 ('analyzed', revsetlang.analyze),
1971 1971 ('optimized', revsetlang.optimize),
1972 1972 ]
1973 1973 if opts['no_optimized']:
1974 1974 stages = stages[:-1]
1975 1975 if opts['verify_optimized'] and opts['no_optimized']:
1976 1976 raise error.Abort(_('cannot use --verify-optimized with '
1977 1977 '--no-optimized'))
1978 1978 stagenames = set(n for n, f in stages)
1979 1979
1980 1980 showalways = set()
1981 1981 showchanged = set()
1982 1982 if ui.verbose and not opts['show_stage']:
1983 1983 # show parsed tree by --verbose (deprecated)
1984 1984 showalways.add('parsed')
1985 1985 showchanged.update(['expanded', 'concatenated'])
1986 1986 if opts['optimize']:
1987 1987 showalways.add('optimized')
1988 1988 if opts['show_stage'] and opts['optimize']:
1989 1989 raise error.Abort(_('cannot use --optimize with --show-stage'))
1990 1990 if opts['show_stage'] == ['all']:
1991 1991 showalways.update(stagenames)
1992 1992 else:
1993 1993 for n in opts['show_stage']:
1994 1994 if n not in stagenames:
1995 1995 raise error.Abort(_('invalid stage name: %s') % n)
1996 1996 showalways.update(opts['show_stage'])
1997 1997
1998 1998 treebystage = {}
1999 1999 printedtree = None
2000 2000 tree = revsetlang.parse(expr, lookup=repo.__contains__)
2001 2001 for n, f in stages:
2002 2002 treebystage[n] = tree = f(tree)
2003 2003 if n in showalways or (n in showchanged and tree != printedtree):
2004 2004 if opts['show_stage'] or n != 'parsed':
2005 2005 ui.write(("* %s:\n") % n)
2006 2006 ui.write(revsetlang.prettyformat(tree), "\n")
2007 2007 printedtree = tree
2008 2008
2009 2009 if opts['verify_optimized']:
2010 2010 arevs = revset.makematcher(treebystage['analyzed'])(repo)
2011 2011 brevs = revset.makematcher(treebystage['optimized'])(repo)
2012 2012 if opts['show_set'] or (opts['show_set'] is None and ui.verbose):
2013 2013 ui.write(("* analyzed set:\n"), smartset.prettyformat(arevs), "\n")
2014 2014 ui.write(("* optimized set:\n"), smartset.prettyformat(brevs), "\n")
2015 2015 arevs = list(arevs)
2016 2016 brevs = list(brevs)
2017 2017 if arevs == brevs:
2018 2018 return 0
2019 2019 ui.write(('--- analyzed\n'), label='diff.file_a')
2020 2020 ui.write(('+++ optimized\n'), label='diff.file_b')
2021 2021 sm = difflib.SequenceMatcher(None, arevs, brevs)
2022 2022 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
2023 2023 if tag in ('delete', 'replace'):
2024 2024 for c in arevs[alo:ahi]:
2025 2025 ui.write('-%s\n' % c, label='diff.deleted')
2026 2026 if tag in ('insert', 'replace'):
2027 2027 for c in brevs[blo:bhi]:
2028 2028 ui.write('+%s\n' % c, label='diff.inserted')
2029 2029 if tag == 'equal':
2030 2030 for c in arevs[alo:ahi]:
2031 2031 ui.write(' %s\n' % c)
2032 2032 return 1
2033 2033
2034 2034 func = revset.makematcher(tree)
2035 2035 revs = func(repo)
2036 2036 if opts['show_set'] or (opts['show_set'] is None and ui.verbose):
2037 2037 ui.write(("* set:\n"), smartset.prettyformat(revs), "\n")
2038 2038 if not opts['show_revs']:
2039 2039 return
2040 2040 for c in revs:
2041 2041 ui.write("%s\n" % c)
2042 2042
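A minimal standalone sketch of the same stage pipeline (alias expansion
omitted, so only the pure transforms registered above are applied)::

    from mercurial import revsetlang

    tree = revsetlang.parse('head() and not closed()')
    for stage in (revsetlang.foldconcat, revsetlang.analyze,
                  revsetlang.optimize):
        tree = stage(tree)
    ui.write(revsetlang.prettyformat(tree) + '\n')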
2043 2043 @command('debugsetparents', [], _('REV1 [REV2]'))
2044 2044 def debugsetparents(ui, repo, rev1, rev2=None):
2045 2045 """manually set the parents of the current working directory
2046 2046
2047 2047 This is useful for writing repository conversion tools, but should
2048 2048 be used with care. For example, neither the working directory nor the
2049 2049 dirstate is updated, so file status may be incorrect after running this
2050 2050 command.
2051 2051
2052 2052 Returns 0 on success.
2053 2053 """
2054 2054
2055 2055 r1 = scmutil.revsingle(repo, rev1).node()
2056 2056 r2 = scmutil.revsingle(repo, rev2, 'null').node()
2057 2057
2058 2058 with repo.wlock():
2059 2059 repo.setparents(r1, r2)
2060 2060
2061 2061 @command('debugssl', [], '[SOURCE]', optionalrepo=True)
2062 2062 def debugssl(ui, repo, source=None, **opts):
2063 2063 '''test a secure connection to a server
2064 2064
2065 2065 This builds the certificate chain for the server on Windows, installing the
2066 2066 missing intermediates and trusted root via Windows Update if necessary. It
2067 2067 does nothing on other platforms.
2068 2068
2069 2069 If SOURCE is omitted, the 'default' path will be used. If a URL is given,
2070 2070 that server is used. See :hg:`help urls` for more information.
2071 2071
2072 2072 If the update succeeds, retry the original operation. Otherwise, the cause
2073 2073 of the SSL error is likely another issue.
2074 2074 '''
2075 2075 if pycompat.osname != 'nt':
2076 2076 raise error.Abort(_('certificate chain building is only possible on '
2077 2077 'Windows'))
2078 2078
2079 2079 if not source:
2080 2080 if not repo:
2081 2081 raise error.Abort(_("there is no Mercurial repository here, and no "
2082 2082 "server specified"))
2083 2083 source = "default"
2084 2084
2085 2085 source, branches = hg.parseurl(ui.expandpath(source))
2086 2086 url = util.url(source)
2087 2087 addr = None
2088 2088
2089 2089 if url.scheme == 'https':
2090 2090 addr = (url.host, url.port or 443)
2091 2091 elif url.scheme == 'ssh':
2092 2092 addr = (url.host, url.port or 22)
2093 2093 else:
2094 2094 raise error.Abort(_("only https and ssh connections are supported"))
2095 2095
2096 2096 from . import win32
2097 2097
2098 2098 s = ssl.wrap_socket(socket.socket(), ssl_version=ssl.PROTOCOL_TLS,
2099 2099 cert_reqs=ssl.CERT_NONE, ca_certs=None)
2100 2100
2101 2101 try:
2102 2102 s.connect(addr)
2103 2103 cert = s.getpeercert(True)
2104 2104
2105 2105 ui.status(_('checking the certificate chain for %s\n') % url.host)
2106 2106
2107 2107 complete = win32.checkcertificatechain(cert, build=False)
2108 2108
2109 2109 if not complete:
2110 2110 ui.status(_('certificate chain is incomplete, updating... '))
2111 2111
2112 2112 if not win32.checkcertificatechain(cert):
2113 2113 ui.status(_('failed.\n'))
2114 2114 else:
2115 2115 ui.status(_('done.\n'))
2116 2116 else:
2117 2117 ui.status(_('full certificate chain is available\n'))
2118 2118 finally:
2119 2119 s.close()
2120 2120
2121 2121 @command('debugsub',
2122 2122 [('r', 'rev', '',
2123 2123 _('revision to check'), _('REV'))],
2124 2124 _('[-r REV] [REV]'))
2125 2125 def debugsub(ui, repo, rev=None):
2126 2126 ctx = scmutil.revsingle(repo, rev, None)
2127 2127 for k, v in sorted(ctx.substate.items()):
2128 2128 ui.write(('path %s\n') % k)
2129 2129 ui.write((' source %s\n') % v[0])
2130 2130 ui.write((' revision %s\n') % v[1])
2131 2131
2132 2132 @command('debugsuccessorssets',
2133 2133 [('', 'closest', False, _('return closest successors sets only'))],
2134 2134 _('[REV]'))
2135 2135 def debugsuccessorssets(ui, repo, *revs, **opts):
2136 2136 """show set of successors for revision
2137 2137
2138 2138 A successors set of changeset A is a consistent group of revisions that
2139 2139 succeed A. It contains only non-obsolete changesets, unless the
2140 2140 closest successors sets option is set.
2141 2141
2142 2142 In most cases a changeset A has a single successors set containing a single
2143 2143 successor (changeset A replaced by A').
2144 2144
2145 2145 A changeset that is made obsolete with no successors is called "pruned".
2146 2146 Such changesets have no successors sets at all.
2147 2147
2148 2148 A changeset that has been "split" will have a successors set containing
2149 2149 more than one successor.
2150 2150
2151 2151 A changeset that has been rewritten in multiple different ways is called
2152 2152 "divergent". Such changesets have multiple successor sets (each of which
2153 2153 may also be split, i.e. have multiple successors).
2154 2154
2155 2155 Results are displayed as follows::
2156 2156
2157 2157 <rev1>
2158 2158 <successors-1A>
2159 2159 <rev2>
2160 2160 <successors-2A>
2161 2161 <successors-2B1> <successors-2B2> <successors-2B3>
2162 2162
2163 2163 Here rev2 has two possible (i.e. divergent) successors sets. The first
2164 2164 holds one element, whereas the second holds three (i.e. the changeset has
2165 2165 been split).
2166 2166 """
2167 2167 # passed to successorssets caching computation from one call to another
2168 2168 cache = {}
2169 2169 ctx2str = str
2170 2170 node2str = short
2171 2171 if ui.debug():
2172 2172 def ctx2str(ctx):
2173 2173 return ctx.hex()
2174 2174 node2str = hex
2175 2175 for rev in scmutil.revrange(repo, revs):
2176 2176 ctx = repo[rev]
2177 2177 ui.write('%s\n' % ctx2str(ctx))
2178 2178 for succsset in obsutil.successorssets(repo, ctx.node(),
2179 2179 closest=opts['closest'],
2180 2180 cache=cache):
2181 2181 if succsset:
2182 2182 ui.write(' ')
2183 2183 ui.write(node2str(succsset[0]))
2184 2184 for node in succsset[1:]:
2185 2185 ui.write(' ')
2186 2186 ui.write(node2str(node))
2187 2187 ui.write('\n')
2188 2188
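A hedged sketch of the underlying query, assuming ``repo`` is an existing
repository and ``node`` a known 20-byte changeset id::

    from mercurial import obsutil
    from mercurial.node import short

    for succsset in obsutil.successorssets(repo, node):
        ui.write('%s\n' % ' '.join(short(n) for n in succsset))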
2189 2189 @command('debugtemplate',
2190 2190 [('r', 'rev', [], _('apply template on changesets'), _('REV')),
2191 2191 ('D', 'define', [], _('define template keyword'), _('KEY=VALUE'))],
2192 2192 _('[-r REV]... [-D KEY=VALUE]... TEMPLATE'),
2193 2193 optionalrepo=True)
2194 2194 def debugtemplate(ui, repo, tmpl, **opts):
2195 2195 """parse and apply a template
2196 2196
2197 2197 If -r/--rev is given, the template is processed as a log template and
2198 2198 applied to the given changesets. Otherwise, it is processed as a generic
2199 2199 template.
2200 2200
2201 2201 Use --verbose to print the parsed tree.
2202 2202 """
2203 2203 revs = None
2204 2204 if opts[r'rev']:
2205 2205 if repo is None:
2206 2206 raise error.RepoError(_('there is no Mercurial repository here '
2207 2207 '(.hg not found)'))
2208 2208 revs = scmutil.revrange(repo, opts[r'rev'])
2209 2209
2210 2210 props = {}
2211 2211 for d in opts[r'define']:
2212 2212 try:
2213 2213 k, v = (e.strip() for e in d.split('=', 1))
2214 2214 if not k or k == 'ui':
2215 2215 raise ValueError
2216 2216 props[k] = v
2217 2217 except ValueError:
2218 2218 raise error.Abort(_('malformed keyword definition: %s') % d)
2219 2219
2220 2220 if ui.verbose:
2221 2221 aliases = ui.configitems('templatealias')
2222 2222 tree = templater.parse(tmpl)
2223 2223 ui.note(templater.prettyformat(tree), '\n')
2224 2224 newtree = templater.expandaliases(tree, aliases)
2225 2225 if newtree != tree:
2226 2226 ui.note(("* expanded:\n"), templater.prettyformat(newtree), '\n')
2227 2227
2228 2228 if revs is None:
2229 2229 t = formatter.maketemplater(ui, tmpl)
2230 2230 props['ui'] = ui
2231 2231 ui.write(t.render(props))
2232 2232 else:
2233 2233 displayer = cmdutil.makelogtemplater(ui, repo, tmpl)
2234 2234 for r in revs:
2235 2235 displayer.show(repo[r], **pycompat.strkwargs(props))
2236 2236 displayer.close()
2237 2237
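A minimal sketch of the generic (no --rev) path above, rendering a template
directly through the formatter; the ``greeting`` keyword is hypothetical::

    t = formatter.maketemplater(ui, '{greeting}\n')
    ui.write(t.render({'greeting': 'hello', 'ui': ui}))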
2238 2238 @command('debugupdatecaches', [])
2239 2239 def debugupdatecaches(ui, repo, *pats, **opts):
2240 2240 """warm all known caches in the repository"""
2241 2241 with repo.wlock(), repo.lock():
2242 2242 repo.updatecaches()
2243 2243
2244 2244 @command('debugupgraderepo', [
2245 2245 ('o', 'optimize', [], _('extra optimization to perform'), _('NAME')),
2246 2246 ('', 'run', False, _('performs an upgrade')),
2247 2247 ])
2248 2248 def debugupgraderepo(ui, repo, run=False, optimize=None):
2249 2249 """upgrade a repository to use different features
2250 2250
2251 2251 If no arguments are specified, the repository is evaluated for upgrade
2252 2252 and a list of problems and potential optimizations is printed.
2253 2253
2254 2254 With ``--run``, a repository upgrade is performed. Behavior of the upgrade
2255 2255 can be influenced via additional arguments. More details will be provided
2256 2256 by the command output when run without ``--run``.
2257 2257
2258 2258 During the upgrade, the repository will be locked and no writes will be
2259 2259 allowed.
2260 2260
2261 2261 At the end of the upgrade, the repository may not be readable while new
2262 2262 repository data is swapped in. This window will be as long as it takes to
2263 2263 rename some directories inside the ``.hg`` directory. On most machines, this
2264 2264 should complete almost instantaneously and the chances of a consumer being
2265 2265 unable to access the repository should be low.
2266 2266 """
2267 2267 return upgrade.upgraderepo(ui, repo, run=run, optimize=optimize)
2268 2268
2269 2269 @command('debugwalk', cmdutil.walkopts, _('[OPTION]... [FILE]...'),
2270 2270 inferrepo=True)
2271 2271 def debugwalk(ui, repo, *pats, **opts):
2272 2272 """show how files match on given patterns"""
2273 2273 opts = pycompat.byteskwargs(opts)
2274 2274 m = scmutil.match(repo[None], pats, opts)
2275 2275 ui.write(('matcher: %r\n' % m))
2276 2276 items = list(repo[None].walk(m))
2277 2277 if not items:
2278 2278 return
2279 2279 f = lambda fn: fn
2280 2280 if ui.configbool('ui', 'slash') and pycompat.ossep != '/':
2281 2281 f = lambda fn: util.normpath(fn)
2282 2282 fmt = 'f %%-%ds %%-%ds %%s' % (
2283 2283 max([len(abs) for abs in items]),
2284 2284 max([len(m.rel(abs)) for abs in items]))
2285 2285 for abs in items:
2286 2286 line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
2287 2287 ui.write("%s\n" % line.rstrip())
2288 2288
2289 2289 @command('debugwireargs',
2290 2290 [('', 'three', '', 'three'),
2291 2291 ('', 'four', '', 'four'),
2292 2292 ('', 'five', '', 'five'),
2293 2293 ] + cmdutil.remoteopts,
2294 2294 _('REPO [OPTIONS]... [ONE [TWO]]'),
2295 2295 norepo=True)
2296 2296 def debugwireargs(ui, repopath, *vals, **opts):
2297 2297 opts = pycompat.byteskwargs(opts)
2298 2298 repo = hg.peer(ui, opts, repopath)
2299 2299 for opt in cmdutil.remoteopts:
2300 2300 del opts[opt[1]]
2301 2301 args = {}
2302 2302 for k, v in opts.iteritems():
2303 2303 if v:
2304 2304 args[k] = v
2305 2305 # run twice to check that we don't mess up the stream for the next command
2306 2306 res1 = repo.debugwireargs(*vals, **args)
2307 2307 res2 = repo.debugwireargs(*vals, **args)
2308 2308 ui.write("%s\n" % res1)
2309 2309 if res1 != res2:
2310 2310 ui.warn("%s\n" % res2)
@@ -1,611 +1,627 b''
1 1 """ Mercurial phases support code
2 2
3 3 ---
4 4
5 5 Copyright 2011 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
6 6 Logilab SA <contact@logilab.fr>
7 7 Augie Fackler <durin42@gmail.com>
8 8
9 9 This software may be used and distributed according to the terms
10 10 of the GNU General Public License version 2 or any later version.
11 11
12 12 ---
13 13
14 14 This module implements most phase logic in mercurial.
15 15
16 16
17 17 Basic Concept
18 18 =============
19 19
20 20 A 'changeset phase' is an indicator that tells us how a changeset is
21 21 manipulated and communicated. The details of each phase is described
22 22 below, here we describe the properties they have in common.
23 23
24 24 Like bookmarks, phases are not stored in history and thus are not
25 25 permanent and leave no audit trail.
26 26
27 27 First, no changeset can be in two phases at once. Phases are ordered,
28 28 so they can be considered from lowest to highest. The default, lowest
29 29 phase is 'public' - this is the normal phase of existing changesets. A
30 30 child changeset cannot be in a lower phase than its parents.
31 31
32 32 These phases share a hierarchy of traits:
33 33
34 34 immutable shared
35 35 public: X X
36 36 draft: X
37 37 secret:
38 38
39 39 Local commits are draft by default.
40 40
41 41 Phase Movement and Exchange
42 42 ===========================
43 43
44 44 Phase data is exchanged by pushkey on pull and push. Some servers have
45 45 a publish option set; we call such a server a "publishing server".
46 46 Pushing a draft changeset to a publishing server changes the phase to
47 47 public.
48 48
49 49 A small list of facts/rules defines the exchange of phases:
50 50
51 51 * old client never changes server states
52 52 * pull never changes server states
53 53 * publish and old server changesets are seen as public by client
54 54 * any secret changeset seen in another repository is lowered to at
55 55 least draft
56 56
57 57 Here is the final table summing up the 49 possible use cases of phase
58 58 exchange:
59 59
60 60 server
61 61 old publish non-publish
62 62 N X N D P N D P
63 63 old client
64 64 pull
65 65 N - X/X - X/D X/P - X/D X/P
66 66 X - X/X - X/D X/P - X/D X/P
67 67 push
68 68 X X/X X/X X/P X/P X/P X/D X/D X/P
69 69 new client
70 70 pull
71 71 N - P/X - P/D P/P - D/D P/P
72 72 D - P/X - P/D P/P - D/D P/P
73 73 P - P/X - P/D P/P - P/D P/P
74 74 push
75 75 D P/X P/X P/P P/P P/P D/D D/D P/P
76 76 P P/X P/X P/P P/P P/P P/P P/P P/P
77 77
78 78 Legend:
79 79
80 80 A/B = final state on client / state on server
81 81
82 82 * N = new/not present,
83 83 * P = public,
84 84 * D = draft,
85 85 * X = not tracked (i.e., the old client or server has no internal
86 86 way of recording the phase.)
87 87
88 88 passive = only pushes
89 89
90 90
91 91 A cell here can be read like this:
92 92
93 93 "When a new client pushes a draft changeset (D) to a publishing
94 94 server where it's not present (N), it's marked public on both
95 95 sides (P/P)."
96 96
97 97 Note: an old client behaves as a publishing server with draft-only content
98 98 - other people see it as public
99 99 - content is pushed as draft
100 100
101 101 """
102 102
103 103 from __future__ import absolute_import
104 104
105 105 import errno
106 106 import struct
107 107
108 108 from .i18n import _
109 109 from .node import (
110 110 bin,
111 111 hex,
112 112 nullid,
113 113 nullrev,
114 114 short,
115 115 )
116 116 from . import (
117 117 error,
118 118 smartset,
119 119 txnutil,
120 120 util,
121 121 )
122 122
123 123 _fphasesentry = struct.Struct('>i20s')
124 124
125 125 allphases = public, draft, secret = range(3)
126 126 trackedphases = allphases[1:]
127 127 phasenames = ['public', 'draft', 'secret']
128 128
129 129 def _readroots(repo, phasedefaults=None):
130 130 """Read phase roots from disk
131 131
132 132 phasedefaults is a list of fn(repo, roots) callable, which are
133 133 executed if the phase roots file does not exist. When phases are
134 134 being initialized on an existing repository, this could be used to
135 135 set selected changesets phase to something else than public.
136 136
137 137 Return (roots, dirty) where dirty is true if roots differ from
138 138 what is being stored.
139 139 """
140 140 repo = repo.unfiltered()
141 141 dirty = False
142 142 roots = [set() for i in allphases]
143 143 try:
144 144 f, pending = txnutil.trypending(repo.root, repo.svfs, 'phaseroots')
145 145 try:
146 146 for line in f:
147 147 phase, nh = line.split()
148 148 roots[int(phase)].add(bin(nh))
149 149 finally:
150 150 f.close()
151 151 except IOError as inst:
152 152 if inst.errno != errno.ENOENT:
153 153 raise
154 154 if phasedefaults:
155 155 for f in phasedefaults:
156 156 roots = f(repo, roots)
157 157 dirty = True
158 158 return roots, dirty
159 159
160 160 def binaryencode(phasemapping):
161 161 """encode a 'phase -> nodes' mapping into a binary stream
162 162
163 163 Since phases are integers, the mapping is actually a python list:
164 164 [[PUBLIC_HEADS], [DRAFTS_HEADS], [SECRET_HEADS]]
165 165 """
166 166 binarydata = []
167 167 for phase, nodes in enumerate(phasemapping):
168 168 for head in nodes:
169 169 binarydata.append(_fphasesentry.pack(phase, head))
170 170 return ''.join(binarydata)
171 171
172 def binarydecode(stream):
173 """decode a binary stream into a 'phase -> nodes' mapping
174
175 Since phases are integers, the mapping is actually a python list."""
176 headsbyphase = [[] for i in allphases]
177 entrysize = _fphasesentry.size
178 while True:
179 entry = stream.read(entrysize)
180 if len(entry) < entrysize:
181 if entry:
182 raise error.Abort(_('bad phase-heads stream'))
183 break
184 phase, node = _fphasesentry.unpack(entry)
185 headsbyphase[phase].append(node)
186 return headsbyphase
187
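A round-trip sketch for the two helpers above, under Python 2 string
semantics: each entry is one '>i20s' record (4-byte big-endian phase,
20-byte node), so three heads encode to 72 bytes::

    import io

    mapping = [['\x11' * 20], [], ['\x22' * 20, '\x33' * 20]]
    data = binaryencode(mapping)
    assert len(data) == 3 * _fphasesentry.size  # 3 * 24 bytes
    assert binarydecode(io.BytesIO(data)) == mapping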
172 188 def _trackphasechange(data, rev, old, new):
173 189 """add a phase move the <data> dictionnary
174 190
175 191 If data is None, nothing happens.
176 192 """
177 193 if data is None:
178 194 return
179 195 existing = data.get(rev)
180 196 if existing is not None:
181 197 old = existing[0]
182 198 data[rev] = (old, new)
183 199
184 200 class phasecache(object):
185 201 def __init__(self, repo, phasedefaults, _load=True):
186 202 if _load:
187 203 # Cheap trick to allow shallow-copy without copy module
188 204 self.phaseroots, self.dirty = _readroots(repo, phasedefaults)
189 205 self._phaserevs = None
190 206 self._phasesets = None
191 207 self.filterunknown(repo)
192 208 self.opener = repo.svfs
193 209
194 210 def getrevset(self, repo, phases):
195 211 """return a smartset for the given phases"""
196 212 self.loadphaserevs(repo) # ensure phase's sets are loaded
197 213
198 214 if self._phasesets and all(self._phasesets[p] is not None
199 215 for p in phases):
200 216 # fast path - use _phasesets
201 217 revs = self._phasesets[phases[0]]
202 218 if len(phases) > 1:
203 219 revs = revs.copy() # only copy when needed
204 220 for p in phases[1:]:
205 221 revs.update(self._phasesets[p])
206 222 if repo.changelog.filteredrevs:
207 223 revs = revs - repo.changelog.filteredrevs
208 224 return smartset.baseset(revs)
209 225 else:
210 226 # slow path - enumerate all revisions
211 227 phase = self.phase
212 228 revs = (r for r in repo if phase(repo, r) in phases)
213 229 return smartset.generatorset(revs, iterasc=True)
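# usage sketch: getrevset(repo, (public, draft)) would yield every
# non-secret revision as a single smartset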
214 230
215 231 def copy(self):
216 232 # Shallow copy meant to ensure isolation in
217 233 # advance/retractboundary(), nothing more.
218 234 ph = self.__class__(None, None, _load=False)
219 235 ph.phaseroots = self.phaseroots[:]
220 236 ph.dirty = self.dirty
221 237 ph.opener = self.opener
222 238 ph._phaserevs = self._phaserevs
223 239 ph._phasesets = self._phasesets
224 240 return ph
225 241
226 242 def replace(self, phcache):
227 243 """replace all values in 'self' with content of phcache"""
228 244 for a in ('phaseroots', 'dirty', 'opener', '_phaserevs', '_phasesets'):
229 245 setattr(self, a, getattr(phcache, a))
230 246
231 247 def _getphaserevsnative(self, repo):
232 248 repo = repo.unfiltered()
233 249 nativeroots = []
234 250 for phase in trackedphases:
235 251 nativeroots.append(map(repo.changelog.rev, self.phaseroots[phase]))
236 252 return repo.changelog.computephases(nativeroots)
237 253
238 254 def _computephaserevspure(self, repo):
239 255 repo = repo.unfiltered()
240 256 revs = [public] * len(repo.changelog)
241 257 self._phaserevs = revs
242 258 self._populatephaseroots(repo)
243 259 for phase in trackedphases:
244 260 roots = list(map(repo.changelog.rev, self.phaseroots[phase]))
245 261 if roots:
246 262 for rev in roots:
247 263 revs[rev] = phase
248 264 for rev in repo.changelog.descendants(roots):
249 265 revs[rev] = phase
250 266
251 267 def loadphaserevs(self, repo):
252 268 """ensure phase information is loaded in the object"""
253 269 if self._phaserevs is None:
254 270 try:
255 271 res = self._getphaserevsnative(repo)
256 272 self._phaserevs, self._phasesets = res
257 273 except AttributeError:
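# the changelog index provides no native computephases()
# (e.g. a pure-Python build): fall back to the pure computation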
258 274 self._computephaserevspure(repo)
259 275
260 276 def invalidate(self):
261 277 self._phaserevs = None
262 278 self._phasesets = None
263 279
264 280 def _populatephaseroots(self, repo):
265 281 """Fills the _phaserevs cache with phases for the roots.
266 282 """
267 283 cl = repo.changelog
268 284 phaserevs = self._phaserevs
269 285 for phase in trackedphases:
270 286 roots = map(cl.rev, self.phaseroots[phase])
271 287 for root in roots:
272 288 phaserevs[root] = phase
273 289
274 290 def phase(self, repo, rev):
275 291 # We need a repo argument here to be able to build _phaserevs
276 292 # if necessary. The repository instance is not stored in
277 293 # phasecache to avoid reference cycles. The changelog instance
278 294 # is not stored because it is a filecache() property and can
279 295 # be replaced without us being notified.
280 296 if rev == nullrev:
281 297 return public
282 298 if rev < nullrev:
283 299 raise ValueError(_('cannot lookup negative revision'))
284 300 if self._phaserevs is None or rev >= len(self._phaserevs):
285 301 self.invalidate()
286 302 self.loadphaserevs(repo)
287 303 return self._phaserevs[rev]
288 304
289 305 def write(self):
290 306 if not self.dirty:
291 307 return
292 308 f = self.opener('phaseroots', 'w', atomictemp=True, checkambig=True)
293 309 try:
294 310 self._write(f)
295 311 finally:
296 312 f.close()
297 313
298 314 def _write(self, fp):
299 315 for phase, roots in enumerate(self.phaseroots):
300 316 for h in roots:
301 317 fp.write('%i %s\n' % (phase, hex(h)))
302 318 self.dirty = False
303 319
304 320 def _updateroots(self, phase, newroots, tr):
305 321 self.phaseroots[phase] = newroots
306 322 self.invalidate()
307 323 self.dirty = True
308 324
309 325 tr.addfilegenerator('phase', ('phaseroots',), self._write)
310 326 tr.hookargs['phases_moved'] = '1'
311 327
312 328 def registernew(self, repo, tr, targetphase, nodes):
313 329 repo = repo.unfiltered()
314 330 self._retractboundary(repo, tr, targetphase, nodes)
315 331 if tr is not None and 'phases' in tr.changes:
316 332 phasetracking = tr.changes['phases']
317 333 torev = repo.changelog.rev
318 334 phase = self.phase
319 335 for n in nodes:
320 336 rev = torev(n)
321 337 revphase = phase(repo, rev)
322 338 _trackphasechange(phasetracking, rev, None, revphase)
323 339 repo.invalidatevolatilesets()
324 340
325 341 def advanceboundary(self, repo, tr, targetphase, nodes):
326 342 """Set all 'nodes' to phase 'targetphase'
327 343
328 344 Nodes with a phase lower than 'targetphase' are not affected.
329 345 """
330 346 # Be careful to preserve shallow-copied values: do not update
331 347 # phaseroots values, replace them.
332 348 if tr is None:
333 349 phasetracking = None
334 350 else:
335 351 phasetracking = tr.changes.get('phases')
336 352
337 353 repo = repo.unfiltered()
338 354
339 355 delroots = [] # list of roots deleted by this operation
340 356 for phase in xrange(targetphase + 1, len(allphases)):
341 357 # filter nodes that are not in a compatible phase already
342 358 nodes = [n for n in nodes
343 359 if self.phase(repo, repo[n].rev()) >= phase]
344 360 if not nodes:
345 361 break # no roots to move anymore
346 362
347 363 olds = self.phaseroots[phase]
348 364
349 365 affected = repo.revs('%ln::%ln', olds, nodes)
350 366 for r in affected:
351 367 _trackphasechange(phasetracking, r, self.phase(repo, r),
352 368 targetphase)
353 369
354 370 roots = set(ctx.node() for ctx in repo.set(
355 371 'roots((%ln::) - %ld)', olds, affected))
356 372 if olds != roots:
357 373 self._updateroots(phase, roots, tr)
358 374 # some roots may need to be declared for lower phases
359 375 delroots.extend(olds - roots)
360 376 # declare deleted root in the target phase
361 377 if targetphase != 0:
362 378 self._retractboundary(repo, tr, targetphase, delroots)
363 379 repo.invalidatevolatilesets()
364 380
365 381 def retractboundary(self, repo, tr, targetphase, nodes):
366 382 oldroots = self.phaseroots[:targetphase + 1]
367 383 if tr is None:
368 384 phasetracking = None
369 385 else:
370 386 phasetracking = tr.changes.get('phases')
371 387 repo = repo.unfiltered()
372 388 if (self._retractboundary(repo, tr, targetphase, nodes)
373 389 and phasetracking is not None):
374 390
375 391 # find the affected revisions
376 392 new = self.phaseroots[targetphase]
377 393 old = oldroots[targetphase]
378 394 affected = set(repo.revs('(%ln::) - (%ln::)', new, old))
379 395
380 396 # find the phase of the affected revision
381 397 for phase in xrange(targetphase, -1, -1):
382 398 if phase:
383 399 roots = oldroots[phase]
384 400 revs = set(repo.revs('%ln::%ld', roots, affected))
385 401 affected -= revs
386 402 else: # public phase
387 403 revs = affected
388 404 for r in revs:
389 405 _trackphasechange(phasetracking, r, phase, targetphase)
390 406 repo.invalidatevolatilesets()
391 407
392 408 def _retractboundary(self, repo, tr, targetphase, nodes):
393 409 # Be careful to preserve shallow-copied values: do not update
394 410 # phaseroots values, replace them.
395 411
396 412 repo = repo.unfiltered()
397 413 currentroots = self.phaseroots[targetphase]
398 414 finalroots = oldroots = set(currentroots)
399 415 newroots = [n for n in nodes
400 416 if self.phase(repo, repo[n].rev()) < targetphase]
401 417 if newroots:
402 418
403 419 if nullid in newroots:
404 420 raise error.Abort(_('cannot change null revision phase'))
405 421 currentroots = currentroots.copy()
406 422 currentroots.update(newroots)
407 423
408 424 # Only compute new roots for revs above the roots that are being
409 425 # retracted.
410 426 minnewroot = min(repo[n].rev() for n in newroots)
411 427 aboveroots = [n for n in currentroots
412 428 if repo[n].rev() >= minnewroot]
413 429 updatedroots = repo.set('roots(%ln::)', aboveroots)
414 430
415 431 finalroots = set(n for n in currentroots if repo[n].rev() <
416 432 minnewroot)
417 433 finalroots.update(ctx.node() for ctx in updatedroots)
418 434 if finalroots != oldroots:
419 435 self._updateroots(targetphase, finalroots, tr)
420 436 return True
421 437 return False
422 438
423 439 def filterunknown(self, repo):
424 440 """remove unknown nodes from the phase boundary
425 441
426 442 Nothing is lost as unknown nodes only hold data for their descendants.
427 443 """
428 444 filtered = False
429 445 nodemap = repo.changelog.nodemap # to filter unknown nodes
430 446 for phase, nodes in enumerate(self.phaseroots):
431 447 missing = sorted(node for node in nodes if node not in nodemap)
432 448 if missing:
433 449 for mnode in missing:
434 450 repo.ui.debug(
435 451 'removing unknown node %s from %i-phase boundary\n'
436 452 % (short(mnode), phase))
437 453 nodes.symmetric_difference_update(missing)
438 454 filtered = True
439 455 if filtered:
440 456 self.dirty = True
441 457 # filterunknown is called by repo.destroyed; we may have no changes in
442 458 # the roots, but the phaserevs contents are certainly invalid (or at
443 459 # least we have no proper way to check that). Related to issue 3858.
444 460 #
445 461 # The other caller is __init__, which has no _phaserevs initialized
446 462 # anyway. If this changes, we should consider adding a dedicated
447 463 # "destroyed" function to phasecache, or a proper cache key mechanism
448 464 # (see the branchmap one)
449 465 self.invalidate()
450 466
451 467 def advanceboundary(repo, tr, targetphase, nodes):
452 468 """Add nodes to a phase changing other nodes phases if necessary.
453 469
454 470 This function move boundary *forward* this means that all nodes
455 471 are set in the target phase or kept in a *lower* phase.
456 472
457 473 Simplify boundary to contains phase roots only."""
458 474 phcache = repo._phasecache.copy()
459 475 phcache.advanceboundary(repo, tr, targetphase, nodes)
460 476 repo._phasecache.replace(phcache)
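# usage sketch (hypothetical 'node' variable): turning a changeset and
# all of its ancestors public, as a push to a publishing server would:
#
#   with repo.transaction('make-public') as tr:
#       advanceboundary(repo, tr, public, [node])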
461 477
462 478 def retractboundary(repo, tr, targetphase, nodes):
463 479 """Set nodes back to a phase changing other nodes phases if
464 480 necessary.
465 481
466 482 This function move boundary *backward* this means that all nodes
467 483 are set in the target phase or kept in a *higher* phase.
468 484
469 485 Simplify boundary to contains phase roots only."""
470 486 phcache = repo._phasecache.copy()
471 487 phcache.retractboundary(repo, tr, targetphase, nodes)
472 488 repo._phasecache.replace(phcache)
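# usage sketch (hypothetical 'node' variable): keeping a changeset and
# its descendants out of exchange by retracting them to secret:
#
#   with repo.transaction('make-secret') as tr:
#       retractboundary(repo, tr, secret, [node])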
473 489
474 490 def registernew(repo, tr, targetphase, nodes):
475 491 """register a new revision and its phase
476 492
477 493 Code adding revisions to the repository should use this function to
478 494 set new changesets in their target phase (or higher).
479 495 """
480 496 phcache = repo._phasecache.copy()
481 497 phcache.registernew(repo, tr, targetphase, nodes)
482 498 repo._phasecache.replace(phcache)
483 499
484 500 def listphases(repo):
485 501 """List phases root for serialization over pushkey"""
486 502 # Use ordered dictionary so behavior is deterministic.
487 503 keys = util.sortdict()
488 504 value = '%i' % draft
489 505 for root in repo._phasecache.phaseroots[draft]:
490 506 keys[hex(root)] = value
491 507
492 508 if repo.publishing():
493 509 # Add extra data to let the remote know we are a publishing
494 510 # repo. Publishing repos can't just pretend they are old repos.
495 511 # When pushing to a publishing repo, the client still needs to
496 512 # push the phase boundary.
497 513 #
498 514 # A push does not only send changesets; it also sends phase data.
499 515 # New phase data may apply to common changesets which won't be
500 516 # pushed (as they are common). Here is a very simple example:
501 517 #
502 518 # 1) repo A pushes changeset X as draft to repo B
503 519 # 2) repo B makes changeset X public
504 520 # 3) repo B pushes to repo A. X is not pushed, but the data that
505 521 #    X is now public should be
506 522 #
507 523 # The server can't handle it on its own, as it has no idea of the
508 524 # client's phase data.
509 525 keys['publishing'] = 'True'
510 526 return keys
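# usage sketch: for a publishing repo with a single draft root, the
# returned sortdict would map that root's hex node to '1' and carry
# the extra {'publishing': 'True'} entry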
511 527
512 528 def pushphase(repo, nhex, oldphasestr, newphasestr):
513 529 """List phases root for serialization over pushkey"""
514 530 repo = repo.unfiltered()
515 531 with repo.lock():
516 532 currentphase = repo[nhex].phase()
517 533 newphase = abs(int(newphasestr)) # let's avoid negative index surprise
518 534 oldphase = abs(int(oldphasestr)) # let's avoid negative index surprise
519 535 if currentphase == oldphase and newphase < oldphase:
520 536 with repo.transaction('pushkey-phase') as tr:
521 537 advanceboundary(repo, tr, newphase, [bin(nhex)])
522 538 return True
523 539 elif currentphase == newphase:
524 540 # raced, but got correct result
525 541 return True
526 542 else:
527 543 return False
528 544
529 545 def subsetphaseheads(repo, subset):
530 546 """Finds the phase heads for a subset of a history
531 547
532 548 Returns a list indexed by phase number where each item is a list of phase
533 549 head nodes.
534 550 """
535 551 cl = repo.changelog
536 552
537 553 headsbyphase = [[] for i in allphases]
538 554 # No need to keep track of secret phase; any heads in the subset that
539 555 # are not mentioned are implicitly secret.
540 556 for phase in allphases[:-1]:
541 557 revset = "heads(%%ln & %s())" % phasenames[phase]
542 558 headsbyphase[phase] = [cl.node(r) for r in repo.revs(revset, subset)]
543 559 return headsbyphase
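# the returned list has the shape binaryencode() expects, so the two
# can be chained to build a binary phase-heads payload:
#
#   data = binaryencode(subsetphaseheads(repo, subset))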
544 560
545 561 def updatephases(repo, tr, headsbyphase):
546 562 """Updates the repo with the given phase heads"""
547 563 # Now advance phase boundaries of all but secret phase
548 564 for phase in allphases[:-1]:
549 565 advanceboundary(repo, tr, phase, headsbyphase[phase])
550 566
551 567 def analyzeremotephases(repo, subset, roots):
552 568 """Compute phases heads and root in a subset of node from root dict
553 569
554 570 * subset is heads of the subset
555 571 * roots is {<nodeid> => phase} mapping. key and value are string.
556 572
557 573 Accept unknown element input
558 574 """
559 575 repo = repo.unfiltered()
560 576 # build list from dictionary
561 577 draftroots = []
562 578 nodemap = repo.changelog.nodemap # to filter unknown nodes
563 579 for nhex, phase in roots.iteritems():
564 580 if nhex == 'publishing': # ignore data related to publish option
565 581 continue
566 582 node = bin(nhex)
567 583 phase = int(phase)
568 584 if phase == public:
569 585 if node != nullid:
570 586 repo.ui.warn(_('ignoring inconsistent public root'
571 587 ' from remote: %s\n') % nhex)
572 588 elif phase == draft:
573 589 if node in nodemap:
574 590 draftroots.append(node)
575 591 else:
576 592 repo.ui.warn(_('ignoring unexpected root from remote: %i %s\n')
577 593 % (phase, nhex))
578 594 # compute heads
579 595 publicheads = newheads(repo, subset, draftroots)
580 596 return publicheads, draftroots
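# note: the 'roots' dict consumed here matches the shape produced by
# listphases() above, including the optional 'publishing' marker entry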
581 597
582 598 def newheads(repo, heads, roots):
583 599 """compute new head of a subset minus another
584 600
585 601 * `heads`: define the first subset
586 602 * `roots`: define the second we subtract from the first"""
587 603 repo = repo.unfiltered()
588 604 revset = repo.set('heads((%ln + parents(%ln)) - (%ln::%ln))',
589 605 heads, roots, roots, heads)
590 606 return [c.node() for c in revset]
591 607
592 608
593 609 def newcommitphase(ui):
594 610 """helper to get the target phase of new commit
595 611
596 612 Handle all possible values for the phases.new-commit options.
597 613
598 614 """
599 615 v = ui.config('phases', 'new-commit', draft)
600 616 try:
601 617 return phasenames.index(v)
602 618 except ValueError:
603 619 try:
604 620 return int(v)
605 621 except ValueError:
606 622 msg = _("phases.new-commit: not a valid phase name ('%s')")
607 623 raise error.ConfigError(msg % v)
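# e.g. new commits become secret when the configuration contains:
#
#   [phases]
#   new-commit = secret
#
# a raw integer ('2') is accepted as well, via the int() fallback above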
608 624
609 625 def hassecret(repo):
610 626 """utility function that check if a repo have any secret changeset."""
611 627 return bool(repo._phasecache.phaseroots[secret])