pathcopies: give up any optimization based on `introrev`

Between 8a0136f69027 and d98fb3f42f33, we sped up the search for the
introduction revision during path copies. However, further checking shows
that finding the introduction revision is still expensive and that we are
better off without it. So we simply drop it and rely only on the linkrev
optimization.

I ran `perfpathcopies` on 6989 pairs of revisions in the pypy repository
(using `hg perfhelper-pathcopies`). The results are massively in favor of
dropping this condition, and the results of the copy tracing are
unchanged.

Attempts to use a smaller change that preserved linkrev usage were
unsuccessful; they can return wrong results. The following change broke
test-mv-cp-st-diff.t:

    - if not f.isintroducedafter(limit):
    + if limit >= 0 and f.linkrev() < limit:
          return None

Here are various numbers, before and after this changeset. The `before`
and `after` columns are wall-clock seconds, `saved-time` is before minus
after, and `ratio` is after divided by before:

                  source       destination       before      after  saved-time     ratio
    worst cases   e66f24650daf 695dfb0f493b    1.062843   1.246369   -0.183526  1.172675
                  c979853a3b6a 8d60fe293e79    1.036985   1.196414   -0.159429  1.153743
                  22349fa2fc33 fbb1c9fd86c0    0.879926   1.038682   -0.158756  1.180420
                  682b98f3e672 a4878080a536    0.909952   1.063801   -0.153849  1.169074
                  5adabc9b9848 920958a93997    0.993622   1.147452   -0.153830  1.154817
    worse 1%      dbfbfcf077e9 aea8f2fd3593    1.016595   1.082999   -0.066404  1.065320
    worse 5%      c95f1ced15f2 7d29d5e39734    0.453694   0.471156   -0.017462  1.038488
    worse 10%     3e144ed1d5b7 2aef0e942480    0.035140   0.037535   -0.002395  1.068156
    worse 25%     321fc60db035 801748ba582a    0.009267   0.009325   -0.000058  1.006259
    median        2088ce763fc2 e6991321d78b    0.000665   0.000651    0.000014  0.978947
    best 25%      915631a97de6 385b31354be6    0.040743   0.040363    0.000380  0.990673
    best 10%      ad495c36a765 19c10384d3e7    0.431658   0.411490    0.020168  0.953278
    best 5%       d13ae7d283ae 813c99f810ac    1.141404   1.075346    0.066058  0.942126
    best 1%       81593cb4a496 99ae11866969    1.833297   0.063823    1.769474  0.034813
    best cases    c3b14617fbd7 743a0fcaa4eb 1101.811740   2.735970 1099.075770  0.002483
                  c3b14617fbd7 9ba6ab77fd29 1116.753953   2.800729 1113.953224  0.002508
                  058b99d6e81f 57e249b7a3ea 1246.128485   3.042762 1243.085723  0.002442
                  9a8c361aab49 0354a250d371 1253.111894   3.085796 1250.026098  0.002463
                  442dbbc53c68 3ec1002a818c 1261.786294   3.138607 1258.647687  0.002487

As one can see, the average case is not really impacted. However, the
worst cases after this changeset are much better than the ones we had
before it. There are 30 pairs where the improvement is above 10 minutes.
This shows in the combined time for all pairs:

    before: 26256s
    after:   1300s (-95%)

If we remove these 30 pathological cases, we still see a significant
improvement:

    before: 1631s
    after:  1245s (-24%)
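
For orientation, here is a rough sketch of the ancestor walk in which the
cutoff under discussion sits. The helper name and signature paraphrase the
path-copy tracing code in mercurial/copies.py; treat it as an illustration
under those assumptions, not as the exact upstream source.

    def _tracefile(fctx, am, limit):
        """Return the ancestor of fctx recorded in ancestor manifest am."""
        for f in fctx.ancestors():
            # Found the ancestor recorded in the ancestor manifest.
            if am.get(f.path(), None) == f.filenode():
                return f
            # The cutoff sat here. Computing the introduction revision
            # (f.isintroducedafter(limit)) proved too expensive, and the
            # linkrev variant from the diff above was rejected because a
            # linkrev is only an approximation and can end the walk too
            # early, returning wrong copy-tracing results.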

highlight.py | 101 lines | 3.1 KiB | text/x-python
# highlight.py - highlight extension implementation file
#
# Copyright 2007-2009 Adam Hupp <adam@hupp.org> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
#
# The original module was split into an interface and an implementation
# file to defer pygments loading and speed up extension setup.

from __future__ import absolute_import

from mercurial import demandimport

demandimport.IGNORES.update([b'pkgutil', b'pkg_resources', b'__main__'])

from mercurial import (
    encoding,
    pycompat,
)

from mercurial.utils import stringutil

with demandimport.deactivated():
    import pygments
    import pygments.formatters
    import pygments.lexers
    import pygments.plugin
    import pygments.util

    for unused in pygments.plugin.find_plugin_lexers():
        pass

highlight = pygments.highlight
ClassNotFound = pygments.util.ClassNotFound
guess_lexer = pygments.lexers.guess_lexer
guess_lexer_for_filename = pygments.lexers.guess_lexer_for_filename
TextLexer = pygments.lexers.TextLexer
HtmlFormatter = pygments.formatters.HtmlFormatter

SYNTAX_CSS = (
    b'\n<link rel="stylesheet" href="{url}highlightcss" type="text/css" />'
)


def pygmentize(field, fctx, style, tmpl, guessfilenameonly=False):

    # append a <link ...> to the syntax highlighting css
    tmpl.load(b'header')
    old_header = tmpl.cache[b'header']
    if SYNTAX_CSS not in old_header:
        new_header = old_header + SYNTAX_CSS
        tmpl.cache[b'header'] = new_header

    text = fctx.data()
    if stringutil.binary(text):
        return

    # str.splitlines() != unicode.splitlines() because "reasons": unicode
    # splitlines() also breaks on \x0c and \x1c-\x1e, which bytes
    # splitlines() leaves alone. Strip them so the number of colorized
    # lines stays in sync with the lines the templater iterates over.
    for c in b"\x0c\x1c\x1d\x1e":
        if c in text:
            text = text.replace(c, b'')

    # Pygments is best used with Unicode strings:
    # <http://pygments.org/docs/unicode/>
    text = text.decode(pycompat.sysstr(encoding.encoding), 'replace')

    # To get multi-line strings right, we can't format line-by-line
    try:
        path = pycompat.sysstr(fctx.path())
        lexer = guess_lexer_for_filename(path, text[:1024], stripnl=False)
    except (ClassNotFound, ValueError):
        # guess_lexer will return a lexer if *any* lexer matches. There is
        # no way to specify a minimum match score. This can give a high
        # rate of false positives on files with an unknown filename
        # pattern.
        if guessfilenameonly:
            return

        try:
            lexer = guess_lexer(text[:1024], stripnl=False)
        except (ClassNotFound, ValueError):
            # Don't highlight unknown files
            return

    # Don't highlight text files
    if isinstance(lexer, TextLexer):
        return

    formatter = HtmlFormatter(nowrap=True, style=pycompat.sysstr(style))

    colorized = highlight(text, lexer, formatter)
    coloriter = (
        s.encode(pycompat.sysstr(encoding.encoding), 'replace')
        for s in colorized.splitlines()
    )

    tmpl._filters[b'colorize'] = lambda x: next(coloriter)

    oldl = tmpl.cache[field]
    newl = oldl.replace(b'line|escape', b'line|colorize')
    tmpl.cache[field] = newl
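
For readers unfamiliar with the Pygments calls used above, here is a
minimal standalone sketch of the same pipeline: guess a lexer from the
filename, fall back to content-based guessing, and render with
HtmlFormatter. The file name and source text are invented for
illustration; only the Pygments API usage mirrors pygmentize() above.

    import pygments
    import pygments.formatters
    import pygments.lexers
    import pygments.util

    # Invented sample input standing in for fctx.path() / fctx.data().
    path = 'sample.py'
    text = u'def greet(name):\n    return "hello %s" % name\n'

    try:
        # Prefer a lexer matching the filename, seeded with the head of
        # the text, as pygmentize() does.
        lexer = pygments.lexers.guess_lexer_for_filename(
            path, text[:1024], stripnl=False
        )
    except (pygments.util.ClassNotFound, ValueError):
        # Fall back to guessing from content alone.
        lexer = pygments.lexers.guess_lexer(text[:1024], stripnl=False)

    # nowrap=True emits only the highlighted spans, leaving the caller
    # (in the extension, the hgweb templates) to supply the surrounding
    # markup.
    formatter = pygments.formatters.HtmlFormatter(nowrap=True, style='default')
    print(pygments.highlight(text, lexer, formatter))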