commands: make commit acquire locks before processing (issue4368)

Before this patch, "hg commit" (process A) executes the steps below:

  1. get the current branch heads via 'repo.branchheads()'
     - caches 'repo.changelog'
  2. invoke 'repo.commit()'
  3. acquire wlock
     - invalidates 'repo.dirstate'
  4. access 'repo.dirstate'
     - re-reads '.hg/dirstate'
     - checks the validity of parent revisions against 'repo.changelog'
  5. invoke 'repo.commitctx()'
  6. acquire the store lock (slock)
     - invalidates 'repo.changelog'
  7. do the committing
  8. release slock
  9. release wlock
  10. check for a new branch head (via 'cmdutil.commitstatus()')

If the wlock acquisition at (3) has to wait for another "hg commit"
(process B) or a similar command running in parallel to release the wlock,
process A ends up creating an orphan revision, because:

  - '.hg/dirstate' refers to the revision newly added by process B as its
    parent
  - but the already cached 'repo.changelog' doesn't contain that revision
  - therefore, validating the parents of '.hg/dirstate' at (4) replaces
    that revision with 'nullid'

Process A then creates an "orphan" revision whose parent is the "null"
revision. In addition, "created new head" may be shown unintentionally at
the end of process A if the store is updated in parallel, because both
getting the branch heads (1) and checking for a new branch head (10) are
executed outside the slock scope.

To avoid this issue, this patch makes "hg commit" acquire the wlock and
slock before processing. This resolves the issue between "hg commit"
processes, but not the one between "hg commit" and other commands;
subsequent patches resolve the latter.

Even after this patch, the corner-case problems below remain:

  - the filecache may overlook changes to '.hg/dirstate', which causes a
    similar issue (see below for details)
    https://bz.mercurial-scm.org/show_bug.cgi?id=4368#c10
  - a 3rd party extension may cause a similar issue if it uses
    'repo.commit()' directly without acquiring the wlock and slock

These could be fixed by acquiring the slock at the beginning of
'repo.commit()', but that change seems more suitable for the "default"
branch. In fact, acquisition of the slock itself was already introduced
on the "default" branch by 4414d500604f, but not at the beginning of
'repo.commit()'.

This patch also changes some tests:

  - test-fncache.t needs the tricky wrapping to make sure the wlock is
    released (that is, to force its acquisition to fail) reliably
  - the order of "hg commit" output changes with the widened lock scope,
    because some hooks are fired after the wlock is released
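A minimal sketch of the locking pattern described above (not the actual
commands.py change from this patch; the command signature, comments, and
the 'lockmod' import alias are illustrative assumptions):

    from mercurial import lock as lockmod

    def commit(ui, repo, *pats, **opts):
        wlock = lock = None
        try:
            # take both locks up front, before branch heads and the
            # dirstate are read, so the cached state cannot go stale
            wlock = repo.wlock()
            lock = repo.lock()
            # ... get branch heads, run the commit, check for a new head ...
        finally:
            lockmod.release(lock, wlock)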

# peer.py - repository base classes for mercurial
#
# Copyright 2005, 2006 Matt Mackall <mpm@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .i18n import _
from . import (
    error,
    util,
)
# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        pass

class localbatch(batcher):
    '''performs the queued calls directly'''
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))
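
# Illustrative usage (a sketch, not part of this module's API; 'somepeer'
# is a placeholder for any peer object):
#
#   b = somepeer.batch()   # a localbatch for a local peer
#   f = b.heads()          # queued via __getattr__; returns a future
#   b.submit()             # performs the queued calls
#   heads = f.value        # the result is now available on the future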

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Handle locally computable results first:
        if not one:
            yield "a local result", None
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = batchable.next()
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.func_name, encargsorres))
        return batchable.next()
    setattr(plain, 'batchable', f)
    return plain
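
# Illustrative sketch (a hypothetical peer method, not taken from this
# module; 'encodekey', 'decodevalue' and self._submitone are assumed to be
# provided by the concrete peer class):
#
#   class examplepeer(peerrepository):
#       @batchable
#       def lookup(self, key):
#           f = future()
#           yield [('key', encodekey(key))], f   # encoded args + future
#           yield decodevalue(f.value)           # decode the batched result
#
# Calling examplepeer.lookup(key) directly drives both phases at once
# through plain() above; a batching remote peer can instead use the
# 'batchable' attribute to split encoding and decoding across one request.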

class peerrepository(object):

    def batch(self):
        return localbatch(self)

    def capable(self, name):
        '''tell whether repo supports named capability.
        return False if not supported.
        if boolean capability, return True.
        if string capability, return string.'''
        caps = self._capabilities()
        if name in caps:
            return True
        name_eq = name + '='
        for cap in caps:
            if cap.startswith(name_eq):
                return cap[len(name_eq):]
        return False
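
    # Illustrative example (hypothetical capability set, not a fixed list):
    # given _capabilities() returning ['lookup', 'branchmap', 'bundle2=HG20'],
    #   capable('lookup')     -> True      (boolean capability)
    #   capable('bundle2')    -> 'HG20'    (string capability)
    #   capable('largefiles') -> False     (not supported)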

    def requirecap(self, name, purpose):
        '''raise an exception if the given capability is not present'''
        if not self.capable(name):
            raise error.CapabilityError(
                _('cannot %s; remote repository does not '
                  'support the %r capability') % (purpose, name))

    def local(self):
        '''return peer as a localrepo, or None'''
        return None

    def peer(self):
        return self

    def canpush(self):
        return True

    def close(self):
        pass