revlog: don't flush data file after every added revision

The current behavior of revlogs is to flush the data file when writing
data to it. Tracing system calls revealed that changegroup processing
incurred numerous write(2) calls for values much smaller than the
default buffer size (Python defaults to 4096 bytes, though CPython can
adjust this at run time based on the detected block size).

The reason we flush revlogs is so readers have all data available. For
example, the current code in revlog.py will re-open the revlog file
(instead of seeking an existing file handle) to read the text of a
revision. This happens, for example, when starting a new delta chain
while adding several revisions from a changegroup. This is likely
sub-optimal: we should probably be sharing file descriptors between
readers and writers to avoid the flushing and the associated overhead
of re-opening files.

While flushing revlogs before reads is necessary, it appears all
callers are diligent about flushing files before a read is performed
(see buildtext() in _addrevision()), making the flush in _writeentry()
redundant and unnecessary. So, we remove it. In practice, this means we
incur a write(2) only a) when the buffer is full (typically 4096 bytes)
or b) when a new delta chain is created, rather than after every added
revision. This applies to every revlog, but by volume it mostly impacts
filelogs.

Removing the redundant flush from _writeentry() significantly reduces
the number of write(2) calls during changegroup processing on my Linux
machine. When applying a changegroup of the hg repo based on my local
repo, the total number of write(2) calls during application of the
mercurial/localrepo.py revlogs dropped from 1,320 to 217 with this
patch applied. Total I/O-related system calls dropped from 1,577 to
474.

When unbundling a mozilla-central gzipped bundle (264,403 changesets
with 1,492,215 changes to 222,507 files), total write(2) calls dropped
from 1,252,881 to 827,106 (a reduction of 425,775!) and total system
calls dropped from 3,601,259 to 3,178,636.

While the system call reduction is significant, it appears to have no
impact on wall time on my Linux and Windows machines. Still, fewer
syscalls is fewer syscalls. Surely this can't hurt. If nothing else, it
makes examining remaining system call usage simpler and opens the door
to experimenting with the performance impact of different buffer sizes.
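To illustrate the buffering behavior described above, here is a
minimal, self-contained sketch (illustrative code, not from Mercurial;
the CountingRaw helper, the file name, and the 4096-byte buffer size
are assumptions for the demo). A buffered writer only hands data to the
OS when its buffer fills or when it is explicitly flushed, so flushing
after every small record turns each record into its own raw write,
roughly one write(2) syscall each:

    import io

    class CountingRaw(io.FileIO):
        """FileIO subclass that counts raw write() calls; each one
        corresponds roughly to a single write(2) syscall."""
        def __init__(self, *args, **kwargs):
            super(CountingRaw, self).__init__(*args, **kwargs)
            self.write_calls = 0

        def write(self, data):
            self.write_calls += 1
            return super(CountingRaw, self).write(data)

    def write_records(path, flush_each):
        raw = CountingRaw(path, 'w')
        fh = io.BufferedWriter(raw, buffer_size=4096)
        for _ in range(1000):
            fh.write(b'x' * 100)  # 100-byte record, far below buffer size
            if flush_each:
                fh.flush()        # force the partial buffer out now
        fh.close()                # flushes the remainder, closes raw
        return raw.write_calls

    print(write_records('flush-demo.bin', True))   # ~1000 raw writes
    print(write_records('flush-demo.bin', False))  # ~25 raw writes
                                                   # (100,000 / 4096)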

File last commit: r26321:db4c192c default
Changeset: r26380:56a640b0 default
test-lock.py
137 lines | 3.9 KiB | text/x-python
from __future__ import absolute_import

import os
import silenttestrunner
import tempfile
import unittest

from mercurial import (
    lock,
    scmutil,
)

testlockname = 'testlock'

class teststate(object):
    def __init__(self, testcase):
        self._testcase = testcase
        self._acquirecalled = False
        self._releasecalled = False
        self._postreleasecalled = False
        d = tempfile.mkdtemp(dir=os.getcwd())
        self.vfs = scmutil.vfs(d, audit=False)

    def makelock(self, *args, **kwargs):
        l = lock.lock(self.vfs, testlockname, releasefn=self.releasefn,
                      acquirefn=self.acquirefn, *args, **kwargs)
        l.postrelease.append(self.postreleasefn)
        return l

    def acquirefn(self):
        self._acquirecalled = True

    def releasefn(self):
        self._releasecalled = True

    def postreleasefn(self):
        self._postreleasecalled = True

    def assertacquirecalled(self, called):
        self._testcase.assertEqual(
            self._acquirecalled, called,
            'expected acquire to be %s but was actually %s' % (
                self._tocalled(called),
                self._tocalled(self._acquirecalled),
            ))

    def resetacquirefn(self):
        self._acquirecalled = False

    def assertreleasecalled(self, called):
        self._testcase.assertEqual(
            self._releasecalled, called,
            'expected release to be %s but was actually %s' % (
                self._tocalled(called),
                self._tocalled(self._releasecalled),
            ))

    def assertpostreleasecalled(self, called):
        self._testcase.assertEqual(
            self._postreleasecalled, called,
            'expected postrelease to be %s but was actually %s' % (
                self._tocalled(called),
                self._tocalled(self._postreleasecalled),
            ))

    def assertlockexists(self, exists):
        actual = self.vfs.lexists(testlockname)
        self._testcase.assertEqual(
            actual, exists,
            'expected lock to %s but actually did %s' % (
                self._toexists(exists),
                self._toexists(actual),
            ))

    def _tocalled(self, called):
        if called:
            return 'called'
        else:
            return 'not called'

    def _toexists(self, exists):
        if exists:
            return 'exists'
        else:
            return 'not exists'

class testlock(unittest.TestCase):
    def testlock(self):
        state = teststate(self)
        lock = state.makelock()
        state.assertacquirecalled(True)
        lock.release()
        state.assertreleasecalled(True)
        state.assertpostreleasecalled(True)
        state.assertlockexists(False)

    def testrecursivelock(self):
        state = teststate(self)
        lock = state.makelock()
        state.assertacquirecalled(True)

        state.resetacquirefn()
        lock.lock()
        # recursive lock should not call acquirefn again
        state.assertacquirecalled(False)

        lock.release()  # brings lock refcount down from 2 to 1
        state.assertreleasecalled(False)
        state.assertpostreleasecalled(False)
        state.assertlockexists(True)

        lock.release()  # releases the lock
        state.assertreleasecalled(True)
        state.assertpostreleasecalled(True)
        state.assertlockexists(False)

    def testlockfork(self):
        state = teststate(self)
        lock = state.makelock()
        state.assertacquirecalled(True)
        lock.lock()

        # fake a fork
        lock.pid += 1
        lock.release()
        state.assertreleasecalled(False)
        state.assertpostreleasecalled(False)
        state.assertlockexists(True)

        # release the actual lock
        lock.pid -= 1
        lock.release()
        state.assertreleasecalled(True)
        state.assertpostreleasecalled(True)
        state.assertlockexists(False)

if __name__ == '__main__':
    silenttestrunner.main(__name__)
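For reference, the test can be run directly, assuming a Mercurial
source checkout is on sys.path (silenttestrunner is a helper module
from Mercurial's tests/ directory); it prints nothing when all cases
pass:

    $ python test-lock.py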