##############################################################################
#
# Copyright (c) 2004 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Adapter management
"""
from __future__ import absolute_import

import weakref

from . import implementer
from . import providedBy
from . import Interface
from . import ro
from .interfaces import IAdapterRegistry
from ._compat import _normalize_name
from ._compat import STRING_TYPES
_BLANK = u''


class BaseAdapterRegistry(object):

    # List of methods copied from lookup sub-objects:
    _delegated = ('lookup', 'queryMultiAdapter', 'lookup1', 'queryAdapter',
                  'adapter_hook', 'lookupAll', 'names',
                  'subscriptions', 'subscribers')

    # All registries maintain a generation that can be used by verifying
    # registries
    _generation = 0

    def __init__(self, bases=()):

        # The comments here could be improved. Possibly this bit needs
        # explaining in a separate document, as the comments here can
        # be quite confusing. /regebro

        # {order -> {required -> {provided -> {name -> value}}}}
        # Here "order" is actually an index in a list, "required" and
        # "provided" are interfaces, and "required" is really a nested
        # key. So, for example:
        # for order == 0 (that is, self._adapters[0]), we have:
        #   {provided -> {name -> value}}
        # but for order == 2 (that is, self._adapters[2]), we have:
        #   {r1 -> {r2 -> {provided -> {name -> value}}}}
        #
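        # Illustrative sketch (hypothetical interfaces IFoo and IBar):
        # after register([IFoo], IBar, u'', factory), at order == 1:
        #   self._adapters[1] == {IFoo: {IBar: {u'': factory}}}
        #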
        self._adapters = []

        # {order -> {required -> {provided -> {name -> [value]}}}}
        # where the remarks about adapters above apply
        self._subscribers = []

        # Set, with a reference count, keeping track of the interfaces
        # for which we have provided components:
        self._provided = {}

        # Create ``_v_lookup`` object to perform lookup.  We make this a
        # separate object to make it easier to implement just the
        # lookup functionality in C.  This object keeps track of cache
        # invalidation data in two kinds of registries.

        #   Invalidating registries have caches that are invalidated
        #   when they or their base registries change.  An invalidating
        #   registry can only have invalidating registries as bases.
        #   See LookupBaseFallback below for the pertinent logic.

        #   Verifying registries can't rely on getting invalidation messages,
        #   so have to check the generations of base registries to determine
        #   if their cache data are current.  See VerifyingBasePy below
        #   for the pertinent object.
        self._createLookup()

        # Setting the bases causes the registries described above
        # to be initialized (self._setBases -> self.changed ->
        # self._v_lookup.changed).
        self.__bases__ = bases

    def _setBases(self, bases):
        self.__dict__['__bases__'] = bases
        self.ro = ro.ro(self)
        self.changed(self)

    __bases__ = property(lambda self: self.__dict__['__bases__'],
                         lambda self, bases: self._setBases(bases),
                         )

    def _createLookup(self):
        self._v_lookup = self.LookupClass(self)
        for name in self._delegated:
            self.__dict__[name] = getattr(self._v_lookup, name)

    def changed(self, originally_changed):
        self._generation += 1
        self._v_lookup.changed(originally_changed)

    def register(self, required, provided, name, value):
        if not isinstance(name, STRING_TYPES):
            raise ValueError('name is not a string')
        if value is None:
            self.unregister(required, provided, name, value)
            return

        required = tuple(map(_convert_None_to_Interface, required))
        name = _normalize_name(name)
        order = len(required)
        byorder = self._adapters
        while len(byorder) <= order:
            byorder.append({})
        components = byorder[order]
        key = required + (provided,)

        for k in key:
            d = components.get(k)
            if d is None:
                d = {}
                components[k] = d
            components = d

        if components.get(name) is value:
            return

        components[name] = value

        n = self._provided.get(provided, 0) + 1
        self._provided[provided] = n
        if n == 1:
            self._v_lookup.add_extendor(provided)

        self.changed(self)

    def registered(self, required, provided, name=_BLANK):
        required = tuple(map(_convert_None_to_Interface, required))
        name = _normalize_name(name)
        order = len(required)
        byorder = self._adapters
        if len(byorder) <= order:
            return None

        components = byorder[order]
        key = required + (provided,)

        for k in key:
            d = components.get(k)
            if d is None:
                return None
            components = d

        return components.get(name)

    def unregister(self, required, provided, name, value=None):
        required = tuple(map(_convert_None_to_Interface, required))
        order = len(required)
        byorder = self._adapters
        if order >= len(byorder):
            return False

        components = byorder[order]
        key = required + (provided,)

        # Keep track of how we got to `components`:
        lookups = []
        for k in key:
            d = components.get(k)
            if d is None:
                return
            lookups.append((components, k))
            components = d

        old = components.get(name)
        if old is None:
            return
        if (value is not None) and (old is not value):
            return

        del components[name]
        if not components:
            # Clean out empty containers, since we don't want our keys
            # to reference global objects (interfaces) unnecessarily.
            # This is often a problem when an interface is slated for
            # removal; a hold-over entry in the registry can make it
            # difficult to remove such interfaces.
            for comp, k in reversed(lookups):
                d = comp[k]
                if d:
                    break
                else:
                    del comp[k]
            while byorder and not byorder[-1]:
                del byorder[-1]

        n = self._provided[provided] - 1
        if n == 0:
            del self._provided[provided]
            self._v_lookup.remove_extendor(provided)
        else:
            self._provided[provided] = n

        self.changed(self)

    def subscribe(self, required, provided, value):
        required = tuple(map(_convert_None_to_Interface, required))
        name = _BLANK
        order = len(required)
        byorder = self._subscribers
        while len(byorder) <= order:
            byorder.append({})
        components = byorder[order]
        key = required + (provided,)

        for k in key:
            d = components.get(k)
            if d is None:
                d = {}
                components[k] = d
            components = d

        components[name] = components.get(name, ()) + (value, )

        if provided is not None:
            n = self._provided.get(provided, 0) + 1
            self._provided[provided] = n
            if n == 1:
                self._v_lookup.add_extendor(provided)

        self.changed(self)

    def unsubscribe(self, required, provided, value=None):
        required = tuple(map(_convert_None_to_Interface, required))
        order = len(required)
        byorder = self._subscribers
        if order >= len(byorder):
            return

        components = byorder[order]
        key = required + (provided,)

        # Keep track of how we got to `components`:
        lookups = []
        for k in key:
            d = components.get(k)
            if d is None:
                return
            lookups.append((components, k))
            components = d

        old = components.get(_BLANK)
        if not old:
            # this is belt-and-suspenders against the failure of cleanup below
            return  # pragma: no cover

        if value is None:
            new = ()
        else:
            new = tuple([v for v in old if v is not value])

        if new == old:
            return

        if new:
            components[_BLANK] = new
        else:
            # Instead of setting components[_BLANK] = new, we clean out
            # empty containers, since we don't want our keys to
            # reference global objects (interfaces) unnecessarily.  This
            # is often a problem when an interface is slated for
            # removal; a hold-over entry in the registry can make it
            # difficult to remove such interfaces.
            del components[_BLANK]
            for comp, k in reversed(lookups):
                d = comp[k]
                if d:
                    break
                else:
                    del comp[k]
            while byorder and not byorder[-1]:
                del byorder[-1]

        if provided is not None:
            n = self._provided[provided] + len(new) - len(old)
            if n == 0:
                del self._provided[provided]
                self._v_lookup.remove_extendor(provided)

        self.changed(self)

    # XXX hack to fake out twisted's use of a private api.  We need to get
    # them to use the new registered method.
    def get(self, _):  # pragma: no cover
        class XXXTwistedFakeOut:
            selfImplied = {}
        return XXXTwistedFakeOut

_not_in_mapping = object()


class LookupBaseFallback(object):

    def __init__(self):
        # Per-kind result caches, all cleared together in changed():
        #   _cache  -- adapter lookups:      {provided -> {[name ->] required -> value}}
        #   _mcache -- lookupAll results:    {provided -> {required -> value}}
        #   _scache -- subscription results: {provided -> {required -> value}}
        self._cache = {}
        self._mcache = {}
        self._scache = {}

    def changed(self, ignored=None):
        self._cache.clear()
        self._mcache.clear()
        self._scache.clear()

    def _getcache(self, provided, name):
        cache = self._cache.get(provided)
        if cache is None:
            cache = {}
            self._cache[provided] = cache
        if name:
            c = cache.get(name)
            if c is None:
                c = {}
                cache[name] = c
            cache = c
        return cache
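
    # Note: a cached None is a valid entry (a remembered failed lookup), so
    # the lookup methods below use the _not_in_mapping sentinel -- not
    # None -- to distinguish "no cache entry yet" from a cached miss.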
    def lookup(self, required, provided, name=_BLANK, default=None):
        if not isinstance(name, STRING_TYPES):
            raise ValueError('name is not a string')
        cache = self._getcache(provided, name)
        required = tuple(required)
        if len(required) == 1:
            result = cache.get(required[0], _not_in_mapping)
        else:
            result = cache.get(tuple(required), _not_in_mapping)

        if result is _not_in_mapping:
            result = self._uncached_lookup(required, provided, name)
            if len(required) == 1:
                cache[required[0]] = result
            else:
                cache[tuple(required)] = result

        if result is None:
            return default

        return result

    def lookup1(self, required, provided, name=_BLANK, default=None):
        if not isinstance(name, STRING_TYPES):
            raise ValueError('name is not a string')
        cache = self._getcache(provided, name)
        result = cache.get(required, _not_in_mapping)
        if result is _not_in_mapping:
            return self.lookup((required, ), provided, name, default)

        if result is None:
            return default

        return result

    def queryAdapter(self, object, provided, name=_BLANK, default=None):
        return self.adapter_hook(provided, object, name, default)

    def adapter_hook(self, provided, object, name=_BLANK, default=None):
        if not isinstance(name, STRING_TYPES):
            raise ValueError('name is not a string')
        required = providedBy(object)
        cache = self._getcache(provided, name)
        factory = cache.get(required, _not_in_mapping)
        if factory is _not_in_mapping:
            factory = self.lookup((required, ), provided, name)

        if factory is not None:
            result = factory(object)
            if result is not None:
                return result

        return default

    def lookupAll(self, required, provided):
        cache = self._mcache.get(provided)
        if cache is None:
            cache = {}
            self._mcache[provided] = cache

        required = tuple(required)
        result = cache.get(required, _not_in_mapping)
        if result is _not_in_mapping:
            result = self._uncached_lookupAll(required, provided)
            cache[required] = result

        return result

    def subscriptions(self, required, provided):
        cache = self._scache.get(provided)
        if cache is None:
            cache = {}
            self._scache[provided] = cache

        required = tuple(required)
        result = cache.get(required, _not_in_mapping)
        if result is _not_in_mapping:
            result = self._uncached_subscriptions(required, provided)
            cache[required] = result

        return result

LookupBasePy = LookupBaseFallback  # BBB

try:
    from ._zope_interface_coptimizations import LookupBase
except ImportError:
    LookupBase = LookupBaseFallback


class VerifyingBaseFallback(LookupBaseFallback):
    # Mixin for lookups against registries which "chain" upwards, and
    # whose lookups invalidate their own caches whenever a parent registry
    # bumps its own '_generation' counter.  E.g., used by
    # zope.component.persistentregistry

    def changed(self, originally_changed):
        LookupBaseFallback.changed(self, originally_changed)
        self._verify_ro = self._registry.ro[1:]
        self._verify_generations = [r._generation for r in self._verify_ro]

    def _verify(self):
        if ([r._generation for r in self._verify_ro]
            != self._verify_generations):
            self.changed(None)

    def _getcache(self, provided, name):
        self._verify()
        return LookupBaseFallback._getcache(self, provided, name)

    def lookupAll(self, required, provided):
        self._verify()
        return LookupBaseFallback.lookupAll(self, required, provided)

    def subscriptions(self, required, provided):
        self._verify()
        return LookupBaseFallback.subscriptions(self, required, provided)

VerifyingBasePy = VerifyingBaseFallback  # BBB

try:
    from ._zope_interface_coptimizations import VerifyingBase
except ImportError:
    VerifyingBase = VerifyingBaseFallback


class AdapterLookupBase(object):

    def __init__(self, registry):
        self._registry = registry
        self._required = {}
        self.init_extendors()
        super(AdapterLookupBase, self).__init__()

    def changed(self, ignored=None):
        super(AdapterLookupBase, self).changed(None)
        for r in self._required.keys():
            r = r()
            if r is not None:
                r.unsubscribe(self)
        self._required.clear()

    # Extendors
    # ---------

    # When given a target interface for an adapter lookup, we need to consider
    # adapters for interfaces that extend the target interface.  This is
    # what the extendors dictionary is about.  It tells us all of the
    # interfaces that extend an interface for which there are adapters
    # registered.

    # We could separate this by order and name, thus reducing the
    # number of provided interfaces to search at run time.  The tradeoff,
    # however, is that we have to store more information.  For example,
    # if the same interface is provided for multiple names and if the
    # interface extends many interfaces, we'll have to keep track of
    # a fair bit of information for each name.  It's better to
    # be space efficient here and be time efficient in the cache
    # implementation.

    # TODO: add invalidation when a provided interface changes, in case
    # the interface's __iro__ has changed.  This is unlikely enough that
    # we'll take our chances for now.
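
    # For example (hypothetical interfaces): if IDerived extends IBase and
    # a component providing IDerived is registered, then both
    # _extendors[IBase] and _extendors[IDerived] contain IDerived, so a
    # lookup asking for IBase also considers IDerived registrations.
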
    def init_extendors(self):
        self._extendors = {}
        for p in self._registry._provided:
            self.add_extendor(p)

    def add_extendor(self, provided):
        _extendors = self._extendors
        for i in provided.__iro__:
            extendors = _extendors.get(i, ())
            _extendors[i] = (
                [e for e in extendors if provided.isOrExtends(e)]
                +
                [provided]
                +
                [e for e in extendors if not provided.isOrExtends(e)]
                )
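
    # Note on the ordering above: the new interface is inserted after the
    # existing extendors it extends and before the rest, so an extendor
    # that extends another appears after it in the list.
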
    def remove_extendor(self, provided):
        _extendors = self._extendors
        for i in provided.__iro__:
            _extendors[i] = [e for e in _extendors.get(i, ())
                             if e != provided]

    def _subscribe(self, *required):
        _refs = self._required
        for r in required:
            ref = r.weakref()
            if ref not in _refs:
                r.subscribe(self)
                _refs[ref] = 1

    def _uncached_lookup(self, required, provided, name=_BLANK):
        required = tuple(required)
        result = None
        order = len(required)
        for registry in self._registry.ro:
            byorder = registry._adapters
            if order >= len(byorder):
                continue

            extendors = registry._v_lookup._extendors.get(provided)
            if not extendors:
                continue

            components = byorder[order]
            result = _lookup(components, required, extendors, name, 0,
                             order)
            if result is not None:
                break

        self._subscribe(*required)

        return result

    def queryMultiAdapter(self, objects, provided, name=_BLANK, default=None):
        factory = self.lookup(map(providedBy, objects), provided, name)
        if factory is None:
            return default

        result = factory(*objects)
        if result is None:
            return default

        return result

    def _uncached_lookupAll(self, required, provided):
        required = tuple(required)
        order = len(required)
        result = {}
        for registry in reversed(self._registry.ro):
            byorder = registry._adapters
            if order >= len(byorder):
                continue
            extendors = registry._v_lookup._extendors.get(provided)
            if not extendors:
                continue
            components = byorder[order]
            _lookupAll(components, required, extendors, result, 0, order)

        self._subscribe(*required)

        return tuple(result.items())

    def names(self, required, provided):
        return [c[0] for c in self.lookupAll(required, provided)]

    def _uncached_subscriptions(self, required, provided):
        required = tuple(required)
        order = len(required)
        result = []
        for registry in reversed(self._registry.ro):
            byorder = registry._subscribers
            if order >= len(byorder):
                continue

            if provided is None:
                extendors = (provided, )
            else:
                extendors = registry._v_lookup._extendors.get(provided)
                if extendors is None:
                    continue

            _subscriptions(byorder[order], required, extendors, _BLANK,
                           result, 0, order)

        self._subscribe(*required)

        return result

    def subscribers(self, objects, provided):
        subscriptions = self.subscriptions(map(providedBy, objects), provided)
        if provided is None:
            # Handlers: call each subscription for its side effects and
            # return no results.
            result = ()
            for subscription in subscriptions:
                subscription(*objects)
        else:
            result = []
            for subscription in subscriptions:
                subscriber = subscription(*objects)
                if subscriber is not None:
                    result.append(subscriber)
        return result


class AdapterLookup(AdapterLookupBase, LookupBase):
    pass


@implementer(IAdapterRegistry)
class AdapterRegistry(BaseAdapterRegistry):
    LookupClass = AdapterLookup

    def __init__(self, bases=()):
        # AdapterRegistries are invalidating registries, so
        # we need to keep track of our invalidating subregistries.
        self._v_subregistries = weakref.WeakKeyDictionary()

        super(AdapterRegistry, self).__init__(bases)

    def _addSubregistry(self, r):
        self._v_subregistries[r] = 1

    def _removeSubregistry(self, r):
        if r in self._v_subregistries:
            del self._v_subregistries[r]

    def _setBases(self, bases):
        old = self.__dict__.get('__bases__', ())
        for r in old:
            if r not in bases:
                r._removeSubregistry(self)

        for r in bases:
            if r not in old:
                r._addSubregistry(self)

        super(AdapterRegistry, self)._setBases(bases)

    def changed(self, originally_changed):
        super(AdapterRegistry, self).changed(originally_changed)

        for sub in self._v_subregistries.keys():
            sub.changed(originally_changed)


class VerifyingAdapterLookup(AdapterLookupBase, VerifyingBase):
    pass


@implementer(IAdapterRegistry)
class VerifyingAdapterRegistry(BaseAdapterRegistry):
    LookupClass = VerifyingAdapterLookup


def _convert_None_to_Interface(x):
    if x is None:
        return Interface
    else:
        return x


def _lookup(components, specs, provided, name, i, l):
    # Recurse through the nested components mapping: position i walks the
    # required specs, trying each spec's __sro__ entry in turn; once all
    # specs are consumed, scan the provided (extendor) interfaces for a
    # component registered under `name`.
    if i < l:
        for spec in specs[i].__sro__:
            comps = components.get(spec)
            if comps:
                r = _lookup(comps, specs, provided, name, i+1, l)
                if r is not None:
                    return r
    else:
        for iface in provided:
            comps = components.get(iface)
            if comps:
                r = comps.get(name)
                if r is not None:
                    return r

    return None


def _lookupAll(components, specs, provided, result, i, l):
    if i < l:
        for spec in reversed(specs[i].__sro__):
            comps = components.get(spec)
            if comps:
                _lookupAll(comps, specs, provided, result, i+1, l)
    else:
        for iface in reversed(provided):
            comps = components.get(iface)
            if comps:
                result.update(comps)


def _subscriptions(components, specs, provided, name, result, i, l):
    if i < l:
        for spec in reversed(specs[i].__sro__):
            comps = components.get(spec)
            if comps:
                _subscriptions(comps, specs, provided, name, result, i+1, l)
    else:
        for iface in reversed(provided):
            comps = components.get(iface)
            if comps:
                comps = comps.get(name)
                if comps:
                    result.extend(comps)
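

# Minimal usage sketch (illustrative only; the interface names below are
# hypothetical).  Runs only when this module is executed directly with
# package context (e.g. via ``python -m``), since it uses the relative
# imports above.
if __name__ == '__main__':  # pragma: no cover

    class IRequest(Interface):
        pass

    class IView(Interface):
        pass

    registry = AdapterRegistry()

    # Register a single-object adapter from IRequest to IView under the
    # unnamed (default) registration.
    registry.register([IRequest], IView, u'', 'view-factory')

    # Looking up by the required and provided interfaces returns the
    # registered value; a miss would return the supplied default (None).
    assert registry.lookup([IRequest], IView, u'') == 'view-factory'
    assert registry.registered([IRequest], IView) == 'view-factory'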