changegroup: introduce requests to define delta generation

Currently, we iterate through each revision we will be producing a delta for,
then call into one of two functions to generate that delta. Deltas are emitted
as we iterate.

A problem with this model is that delta generation is tightly coupled to the
changegroup code, and the storage layer needs to expose APIs like
deltaparent() so changegroup delta generation can produce a delta with that
knowledge.

Another problem is that in this model, deltas can only be produced
sequentially, each one after the previous delta was produced and emitted. Some
storage backends might be capable of producing deltas in parallel (e.g. if the
changegroup deltas are cached somewhere).

This commit aims to solve these problems by turning delta generation into a
2-phase implementation, where the first phase determines info about all the
deltas that need to be generated and the 2nd phase resolves those deltas.

We introduce a "revisiondeltarequest" object that holds data about a
to-be-generated delta. We perform a full pass over all revisions whose delta
is to be generated and create a "revisiondeltarequest" for each. Then we
iterate over the "revisiondeltarequest" instances and derive a
"revisiondelta" for each.

This patch was quite large. In order to avoid even more churn, aspects of the
implementation are less than ideal: e.g. we're recording revision numbers
instead of nodes in a few places, and we don't yet have a formal API for
resolving an iterable of revisiondeltarequest instances. Things will be
improved in subsequent commits.

Unfortunately, this commit reduces performance substantially. For
`hg perfchangegroupchangelog` on my hg repo:

! wall 1.512607 comb 1.510000 user 1.490000 sys 0.020000 (best of 7)
! wall 2.150863 comb 2.150000 user 2.150000 sys 0.000000 (best of 5)

And for `hg bundle -t none-v2 -a` on the mozilla-unified repo:

178.32user 4.22system 3:02.59elapsed
190.97user 4.17system 3:15.19elapsed

Some of this was attributed to changelog slowdown. `hg perfchangegroupchangelog`
on mozilla-unified:

! wall 21.688715 comb 21.690000 user 21.570000 sys 0.120000 (best of 3)
! wall 25.683659 comb 25.680000 user 25.540000 sys 0.140000 (best of 3)

Profiling seems to reveal that the changelog slowdown is due to reading
changelog revisions multiple times: first in the linknode callback to resolve
the set of files changed, and second in the delta generation. Before, we
likely hit the last-revision cache in the revlog when doing delta generation,
since we performed it immediately after the linknode callback.

I'm not exactly sure where the other ~8s are being spent. It might be the
overhead of constructing a few million revisiondeltarequest objects.

I'm OK with the regression for now because it is in service of a larger cause
(storage abstraction). I'll try to profile later and claw back the
performance.

Differential Revision: https://phab.mercurial-scm.org/D4215
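
To make the two-phase model concrete, here is a minimal, self-contained sketch
of the request/resolve split. Every name in it (DeltaRequest,
builddeltarequests, resolvedeltas, and the methods on `store`) is a
hypothetical stand-in used for illustration only, not the actual changegroup
or revlog API:

    import collections

    # Phase 1 output: a lightweight description of a delta still to be
    # computed (a stand-in for the revisiondeltarequest described above).
    DeltaRequest = collections.namedtuple(
        'DeltaRequest', ['node', 'p1node', 'p2node', 'basenode', 'linknode'])

    def builddeltarequests(store, nodes):
        # Phase 1: a full pass that only records *what* to generate.
        # `store` is any object offering these (hypothetical) lookups.
        requests = []
        for node in nodes:
            p1node, p2node = store.parents(node)
            requests.append(DeltaRequest(
                node=node,
                p1node=p1node,
                p2node=p2node,
                basenode=store.deltaparent(node),  # delta base chosen up front
                linknode=store.linknode(node),
            ))
        return requests

    def resolvedeltas(store, requests):
        # Phase 2: turn each request into an actual delta. Because the
        # requests are independent values, a storage backend could in
        # principle resolve them in parallel or serve them from a cache.
        for request in requests:
            yield store.makedelta(request)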

# __init__.py - asv benchmark suite
#
# Copyright 2016 Logilab SA <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
# "historical portability" policy of contrib/benchmarks:
#
# We have to make this code work correctly with current mercurial stable branch
# and if possible with reasonable cost with early Mercurial versions.
'''ASV (https://asv.readthedocs.io) benchmark suite

Benchmarks are parameterized against reference repositories found in the
directory pointed to by the REPOS_DIR environment variable.

Invocation example:

    $ export REPOS_DIR=~/hgperf/repos
    # run suite on given revision
    $ asv --config contrib/asv.conf.json run REV
    # run suite on new changesets found in stable and default branch
    $ asv --config contrib/asv.conf.json run NEW
    # display a comparative result table of benchmark results between two given
    # revisions
    $ asv --config contrib/asv.conf.json compare REV1 REV2
    # compute regression detection and generate ASV static website
    $ asv --config contrib/asv.conf.json publish
    # serve the static website
    $ asv --config contrib/asv.conf.json preview
'''

from __future__ import absolute_import

import functools
import os
import re

from mercurial import (
    extensions,
    hg,
    ui as uimod,
    util,
)

basedir = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                       os.path.pardir, os.path.pardir))
reposdir = os.environ['REPOS_DIR']
reposnames = [name for name in os.listdir(reposdir)
              if os.path.isdir(os.path.join(reposdir, name, ".hg"))]
if not reposnames:
    raise ValueError("No repositories found in $REPOS_DIR")
outputre = re.compile(r'! wall (\d+\.\d+) comb \d+\.\d+ user \d+\.\d+ sys '
                      r'\d+\.\d+ \(best of \d+\)')
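# Example of a perf.py output line matched by outputre:
#   ! wall 1.512607 comb 1.510000 user 1.490000 sys 0.020000 (best of 7)
# group(1) captures the wall time, which runperfcommand() returns as a float.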


def runperfcommand(reponame, command, *args, **kwargs):
    """Run a contrib/perf.py command on repository `reponame` and return the
    reported wall time as a float."""
    os.environ["HGRCPATH"] = os.environ.get("ASVHGRCPATH", "")
    # for "historical portability"
    # ui.load() has been available since d83ca85
    if util.safehasattr(uimod.ui, "load"):
        ui = uimod.ui.load()
    else:
        ui = uimod.ui()
    repo = hg.repository(ui, os.path.join(reposdir, reponame))
    perfext = extensions.load(ui, 'perfext',
                              os.path.join(basedir, 'contrib', 'perf.py'))
    cmd = getattr(perfext, command)
    ui.pushbuffer()
    cmd(ui, repo, *args, **kwargs)
    output = ui.popbuffer()
    match = outputre.search(output)
    if not match:
        raise ValueError("Invalid output {0}".format(output))
    return float(match.group(1))


def perfbench(repos=reposnames, name=None, params=None):
    """decorator to declare an ASV benchmark based on the contrib/perf.py extension

    An ASV benchmark is a Python function with the following attributes:

    __name__: should start with track_, time_ or mem_ to be collected by ASV.
    params and param_names: parameter matrix to display multiple graphs on the
    same page.
    pretty_name: if defined, it is displayed in the web UI instead of __name__
    (useful for revsets).
    The module name is prepended to the benchmark name and displayed as
    "category" in the web UI.

    Benchmarks are automatically parameterized with the repositories found in
    the REPOS_DIR environment variable.

    `params` is the parameter matrix in the form of a list of tuples
    (param_name, [value0, value1]).

    For example, [(x, [a, b]), (y, [c, d])] declares benchmarks for
    (a, c), (a, d), (b, c) and (b, d).
    """
    params = list(params or [])
    params.insert(0, ("repo", repos))

    def decorator(func):
        @functools.wraps(func)
        def wrapped(repo, *args):
            def perf(command, *a, **kw):
                return runperfcommand(repo, command, *a, **kw)
            return func(perf, *args)

        wrapped.params = [p[1] for p in params]
        wrapped.param_names = [p[0] for p in params]
        wrapped.pretty_name = name
        return wrapped

    return decorator
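

# A minimal usage sketch (an assumption about how benchmark modules in this
# directory use perfbench, not a verbatim copy of any of them; the function
# and command names below are examples only):
#
#     @perfbench()
#     def track_tags(perf):
#         return perf("perftags")
#
# ASV collects the wrapped function once per repository found in REPOS_DIR
# (plus one run per combination of any extra `params`) and records the wall
# time it returns.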