wireproto: add streams to frame-based protocol

Previously, the frame-based protocol was just a series of frames, with each frame associated with a request ID.

In order to scale the protocol, we'll want to enable the use of compression. While it is possible to enable compression at the socket/pipe level, this has its disadvantages. The big one is that it undermines the point of frames being standalone, atomic units that can be read and written: if you add compression above the framing protocol, you are back to having a stream-based protocol as opposed to something frame-based.

So in order to preserve frames, compression needs to occur at the frame payload level. Compressing each frame's payload individually will limit compression ratios because the window size of the compressor will be limited by the max frame size, which is 32-64kb as currently defined. It will also add CPU overhead, as it is more efficient for compressors to operate on fewer, larger blocks of data than on more, smaller blocks. So compressing each frame independently is out.

This means we need to compress each frame's payload as if it is part of a larger stream. The simplest approach is to have 1 stream per connection. This could certainly work. However, it has disadvantages (documented below). We could also have 1 stream per RPC/command invocation. (This is the model HTTP/2 goes with.) This also has disadvantages.

The main disadvantage to one global stream is that it has the very real potential to create CPU bottlenecks doing compression. Networks are only getting faster and the performance of single CPU cores has been relatively flat. Newer compression formats like zstandard offer better CPU cycle efficiency than predecessors like zlib. But it is still all too common to saturate your CPU with compression overhead long before you saturate the network pipe.

The main disadvantage with streams per request is that you can't reap the benefits of the compression context for multiple requests.
For example, if you send 1000 RPC requests (or HTTP/2 requests for that matter), the response to each would have its own compression context. The overall size of the raw responses would be larger because compression contexts wouldn't be able to reference data from another request or response.

The approach for streams as implemented in this commit is to support N streams per connection and for streams to potentially span requests and responses. As explained by the added internals docs, this facilitates servers and clients delegating independent streams and compression to independent threads / CPU cores. This helps alleviate the CPU bottleneck of compression. This design also allows compression contexts to be reused across requests/responses. This can result in improved compression ratios and less overhead for compressors and decompressors having to build new contexts.

Another feature that was defined was the ability for individual frames within a stream to declare whether that individual frame's payload uses the content encoding (read: compression) defined by the stream. The idea here is that some servers may serve data from a combination of caches and dynamic resolution. Data coming from caches may be pre-compressed. We want to facilitate servers being able to essentially stream bytes from caches to the wire with minimal overhead. Being able to mix and match which frames are compressed within a stream enables these types of advanced server functionality.

This commit defines the new streams mechanism. Basic code for supporting streams in frames has been added. But that code is seriously lacking and doesn't fully conform to the defined protocol. For example, we don't close any streams, and support for content encoding within streams is not yet implemented. The change was rather invasive and I didn't think it would be reasonable to implement the entire feature in a single commit.
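The stream model described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names (``OutputStream``, ``InputStream``, ``makeframe``) and the tuple "frame" stand in for the real binary frame format, and zlib stands in for whichever content encoding a stream negotiates. The key ideas it demonstrates are (a) one long-lived compression context per stream, kept alive across frames via sync flushes, and (b) a per-frame flag saying whether the payload went through that context at all (so pre-compressed cache data can pass through untouched).

```python
import zlib

class OutputStream(object):
    """One logical stream with a long-lived compression context."""

    def __init__(self, streamid):
        self.streamid = streamid
        self._compressor = zlib.compressobj()

    def makeframe(self, requestid, payload, encoded=True):
        if encoded:
            # Z_SYNC_FLUSH emits all pending bytes for this frame while
            # keeping the compression context alive for later frames in
            # this stream.
            payload = (self._compressor.compress(payload) +
                       self._compressor.flush(zlib.Z_SYNC_FLUSH))
        # (streamid, requestid, encoded-flag, payload) stands in for the
        # real binary frame header + payload.
        return (self.streamid, requestid, encoded, payload)

class InputStream(object):
    """Receiving side: mirrors the sender's compression context."""

    def __init__(self):
        self._decompressor = zlib.decompressobj()

    def readframe(self, frame):
        streamid, requestid, encoded, payload = frame
        if encoded:
            payload = self._decompressor.decompress(payload)
        return requestid, payload
```

Because the context spans frames, frames belonging to different requests can share one stream, and a server can dedicate one such stream (and its CPU cost) to each worker thread.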
For the record, I would have loved to reuse an existing multiplexing protocol to build the new wire protocol on top of. However, I couldn't find a protocol that offers the performance and scaling characteristics that I desired. Namely, it should support multiple compression contexts to facilitate scaling out to multiple CPU cores, and compression contexts should be able to live longer than single RPC requests.

HTTP/2 *almost* fits the bill. But the semantics of HTTP message exchange state that streams can only live for a single request-response. We /could/ tunnel on top of HTTP/2 streams and frames with HEADER and DATA frames. But there's no guarantee that HTTP/2 libraries and proxies would allow us to use HTTP/2 streams and frames without the HTTP message exchange semantics defined in RFC 7540 Section 8. Other RPC protocols like gRPC are built on top of HTTP/2 and thus preserve its semantics of stream per RPC invocation. Even QUIC does this. We could attempt to invent a higher-level stream that spans HTTP/2 streams. But this would be violating HTTP/2 because there is no guarantee that HTTP/2 streams are routed to the same server. The best we can do - which is what this protocol does - is shoehorn all request and response data into a single HTTP message and create streams within. At that point, we've defined a Content-Type in HTTP parlance. It just so happens our media type can also work as a standalone, stream-based protocol, without leaning on HTTP or a similar protocol.

Differential Revision: https://phab.mercurial-scm.org/D2907
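The size argument for long-lived compression contexts is easy to check empirically. The sketch below (illustrative only; the payloads are made up) compresses many similar "responses" two ways: a fresh zlib context per response, as a stream-per-request model forces, versus one shared context spanning all of them, as this protocol permits. The shared context can reference earlier responses in its window, so the total is smaller.

```python
import zlib

# 1000 small, structurally similar responses (made-up payloads).
responses = [b'{"status": "ok", "result": "value-%d"}' % i
             for i in range(1000)]

# One compression context per response (stream-per-request model).
perrequest = sum(len(zlib.compress(r)) for r in responses)

# One shared context spanning all responses; Z_SYNC_FLUSH delimits each
# response while keeping the context (and its window) alive.
comp = zlib.compressobj()
shared = sum(len(comp.compress(r) + comp.flush(zlib.Z_SYNC_FLUSH))
             for r in responses)

# The shared context should win by a wide margin on repetitive data.
```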

vfs.py | 653 lines | 22.4 KiB | text/x-python

# vfs.py - Mercurial 'vfs' classes
#
# Copyright Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import contextlib
import errno
import os
import shutil
import stat
import tempfile
import threading

from .i18n import _
from . import (
    encoding,
    error,
    pathutil,
    pycompat,
    util,
)

def _avoidambig(path, oldstat):
    """Avoid file stat ambiguity forcibly

    This function causes copying ``path`` file, if it is owned by
    another (see issue5418 and issue5584 for detail).
    """
    def checkandavoid():
        newstat = util.filestat.frompath(path)
        # return whether file stat ambiguity is (already) avoided
        return (not newstat.isambig(oldstat) or
                newstat.avoidambig(path, oldstat))
    if not checkandavoid():
        # simply copy to change owner of path to get privilege to
        # advance mtime (see issue5418)
        util.rename(util.mktempcopy(path), path)
        checkandavoid()
class abstractvfs(object):
    """Abstract base class; cannot be instantiated"""

    def __init__(self, *args, **kwargs):
        '''Prevent instantiation; don't call this from subclasses.'''
        raise NotImplementedError('attempted instantiating ' + str(type(self)))

    def tryread(self, path):
        '''gracefully return an empty string for missing files'''
        try:
            return self.read(path)
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
        return ""

    def tryreadlines(self, path, mode='rb'):
        '''gracefully return an empty array for missing files'''
        try:
            return self.readlines(path, mode=mode)
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
        return []

    @util.propertycache
    def open(self):
        '''Open ``path`` file, which is relative to vfs root.

        Newly created directories are marked as "not to be indexed by
        the content indexing service", if ``notindexed`` is specified
        for "write" mode access.
        '''
        return self.__call__

    def read(self, path):
        with self(path, 'rb') as fp:
            return fp.read()

    def readlines(self, path, mode='rb'):
        with self(path, mode=mode) as fp:
            return fp.readlines()
    def write(self, path, data, backgroundclose=False, **kwargs):
        with self(path, 'wb', backgroundclose=backgroundclose, **kwargs) as fp:
            return fp.write(data)

    def writelines(self, path, data, mode='wb', notindexed=False):
        with self(path, mode=mode, notindexed=notindexed) as fp:
            return fp.writelines(data)

    def append(self, path, data):
        with self(path, 'ab') as fp:
            return fp.write(data)

    def basename(self, path):
        """return base element of a path (as os.path.basename would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.basename(path)

    def chmod(self, path, mode):
        return os.chmod(self.join(path), mode)

    def dirname(self, path):
        """return dirname element of a path (as os.path.dirname would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.dirname(path)

    def exists(self, path=None):
        return os.path.exists(self.join(path))

    def fstat(self, fp):
        return util.fstat(fp)

    def isdir(self, path=None):
        return os.path.isdir(self.join(path))

    def isfile(self, path=None):
        return os.path.isfile(self.join(path))

    def islink(self, path=None):
        return os.path.islink(self.join(path))

    def isfileorlink(self, path=None):
        '''return whether path is a regular file or a symlink

        Unlike isfile, this doesn't follow symlinks.'''
        try:
            st = self.lstat(path)
        except OSError:
            return False
        mode = st.st_mode
        return stat.S_ISREG(mode) or stat.S_ISLNK(mode)

    def reljoin(self, *paths):
        """join various elements of a path together (as os.path.join would do)

        The vfs base is not injected so that path stay relative. This exists
        to allow handling of strange encoding if needed."""
        return os.path.join(*paths)

    def split(self, path):
        """split top-most element of a path (as os.path.split would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.split(path)

    def lexists(self, path=None):
        return os.path.lexists(self.join(path))

    def lstat(self, path=None):
        return os.lstat(self.join(path))

    def listdir(self, path=None):
        return os.listdir(self.join(path))

    def makedir(self, path=None, notindexed=True):
        return util.makedir(self.join(path), notindexed)

    def makedirs(self, path=None, mode=None):
        return util.makedirs(self.join(path), mode)

    def makelock(self, info, path):
        return util.makelock(info, self.join(path))

    def mkdir(self, path=None):
        return os.mkdir(self.join(path))
    def mkstemp(self, suffix='', prefix='tmp', dir=None):
        fd, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
                                    dir=self.join(dir))
        dname, fname = util.split(name)
        if dir:
            return fd, os.path.join(dir, fname)
        else:
            return fd, fname

    def readdir(self, path=None, stat=None, skip=None):
        return util.listdir(self.join(path), stat, skip)

    def readlock(self, path):
        return util.readlock(self.join(path))
    def rename(self, src, dst, checkambig=False):
        """Rename from src to dst

        checkambig argument is used with util.filestat, and is useful
        only if destination file is guarded by any lock
        (e.g. repo.lock or repo.wlock).

        To avoid file stat ambiguity forcibly, checkambig=True involves
        copying ``src`` file, if it is owned by another. Therefore, use
        checkambig=True only in limited cases (see also issue5418 and
        issue5584 for detail).
        """
        srcpath = self.join(src)
        dstpath = self.join(dst)
        oldstat = checkambig and util.filestat.frompath(dstpath)
        if oldstat and oldstat.stat:
            ret = util.rename(srcpath, dstpath)
            _avoidambig(dstpath, oldstat)
            return ret
        return util.rename(srcpath, dstpath)

    def readlink(self, path):
        return os.readlink(self.join(path))

    def removedirs(self, path=None):
        """Remove a leaf directory and all empty intermediate ones
        """
        return util.removedirs(self.join(path))
    def rmtree(self, path=None, ignore_errors=False, forcibly=False):
        """Remove a directory tree recursively

        If ``forcibly``, this tries to remove READ-ONLY files, too.
        """
        if forcibly:
            def onerror(function, path, excinfo):
                if function is not os.remove:
                    raise
                # read-only files cannot be unlinked under Windows
                s = os.stat(path)
                if (s.st_mode & stat.S_IWRITE) != 0:
                    raise
                os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
                os.remove(path)
        else:
            onerror = None
        return shutil.rmtree(self.join(path),
                             ignore_errors=ignore_errors, onerror=onerror)

    def setflags(self, path, l, x):
        return util.setflags(self.join(path), l, x)

    def stat(self, path=None):
        return os.stat(self.join(path))

    def unlink(self, path=None):
        return util.unlink(self.join(path))

    def tryunlink(self, path=None):
        """Attempt to remove a file, ignoring missing file errors."""
        util.tryunlink(self.join(path))

    def unlinkpath(self, path=None, ignoremissing=False):
        return util.unlinkpath(self.join(path), ignoremissing=ignoremissing)

    def utime(self, path=None, t=None):
        return os.utime(self.join(path), t)

    def walk(self, path=None, onerror=None):
        """Yield (dirpath, dirs, files) tuple for each directories under path

        ``dirpath`` is relative one from the root of this vfs. This
        uses ``os.sep`` as path separator, even you specify POSIX
        style ``path``.

        "The root of this vfs" is represented as empty ``dirpath``.
        """
        root = os.path.normpath(self.join(None))
        # when dirpath == root, dirpath[prefixlen:] becomes empty
        # because len(dirpath) < prefixlen.
        prefixlen = len(pathutil.normasprefix(root))
        for dirpath, dirs, files in os.walk(self.join(path), onerror=onerror):
            yield (dirpath[prefixlen:], dirs, files)

    @contextlib.contextmanager
    def backgroundclosing(self, ui, expectedcount=-1):
        """Allow files to be closed asynchronously.

        When this context manager is active, ``backgroundclose`` can be passed
        to ``__call__``/``open`` to result in the file possibly being closed
        asynchronously, on a background thread.
        """
        # Sharing backgroundfilecloser between threads is complex and using
        # multiple instances puts us at risk of running out of file
        # descriptors; only allow to use backgroundfilecloser when in main
        # thread.
        if not isinstance(threading.currentThread(), threading._MainThread):
            yield
            return
        vfs = getattr(self, 'vfs', self)
        if getattr(vfs, '_backgroundfilecloser', None):
            raise error.Abort(
                _('can only have 1 active background file closer'))
        with backgroundfilecloser(ui, expectedcount=expectedcount) as bfc:
            try:
                vfs._backgroundfilecloser = bfc
                yield bfc
            finally:
                vfs._backgroundfilecloser = None
class vfs(abstractvfs):
    '''Operate files relative to a base directory

    This class is used to hide the details of COW semantics and
    remote file access from higher level code.

    'cacheaudited' should be enabled only if (a) vfs object is short-lived, or
    (b) the base directory is managed by hg and considered sort-of append-only.
    See pathutil.pathauditor() for details.
    '''
    def __init__(self, base, audit=True, cacheaudited=False, expandpath=False,
                 realpath=False):
        if expandpath:
            base = util.expandpath(base)
        if realpath:
            base = os.path.realpath(base)
        self.base = base
        self._audit = audit
        if audit:
            self.audit = pathutil.pathauditor(self.base, cached=cacheaudited)
        else:
            self.audit = (lambda path, mode=None: True)
        self.createmode = None
        self._trustnlink = None

    @util.propertycache
    def _cansymlink(self):
        return util.checklink(self.base)

    @util.propertycache
    def _chmod(self):
        return util.checkexec(self.base)

    def _fixfilemode(self, name):
        if self.createmode is None or not self._chmod:
            return
        os.chmod(name, self.createmode & 0o666)
    def __call__(self, path, mode="r", atomictemp=False, notindexed=False,
                 backgroundclose=False, checkambig=False, auditpath=True):
        '''Open ``path`` file, which is relative to vfs root.

        Newly created directories are marked as "not to be indexed by
        the content indexing service", if ``notindexed`` is specified
        for "write" mode access.

        If ``backgroundclose`` is passed, the file may be closed
        asynchronously. It can only be used if the
        ``self.backgroundclosing()`` context manager is active. This should
        only be specified if the following criteria hold:

        1. There is a potential for writing thousands of files. Unless you
           are writing thousands of files, the performance benefits of
           asynchronously closing files is not realized.
        2. Files are opened exactly once for the ``backgroundclosing``
           active duration and are therefore free of race conditions between
           closing a file on a background thread and reopening it. (If the
           file were opened multiple times, there could be unflushed data
           because the original file handle hasn't been flushed/closed yet.)

        ``checkambig`` argument is passed to atomictempfile (valid
        only for writing), and is useful only if target file is
        guarded by any lock (e.g. repo.lock or repo.wlock).

        To avoid file stat ambiguity forcibly, checkambig=True involves
        copying ``path`` file opened in "append" mode (e.g. for
        truncation), if it is owned by another. Therefore, use
        combination of append mode and checkambig=True only in limited
        cases (see also issue5418 and issue5584 for detail).
        '''
        if auditpath:
            if self._audit:
                r = util.checkosfilename(path)
                if r:
                    raise error.Abort("%s: %r" % (r, path))
            self.audit(path, mode=mode)
        f = self.join(path)

        if "b" not in mode:
            mode += "b" # for that other OS

        nlink = -1
        if mode not in ('r', 'rb'):
            dirname, basename = util.split(f)
            # If basename is empty, then the path is malformed because it
            # points to a directory. Let the posixfile() call below raise
            # IOError.
            if basename:
                if atomictemp:
                    util.makedirs(dirname, self.createmode, notindexed)
                    return util.atomictempfile(f, mode, self.createmode,
                                               checkambig=checkambig)
                try:
                    if 'w' in mode:
                        util.unlink(f)
                        nlink = 0
                    else:
                        # nlinks() may behave differently for files on Windows
                        # shares if the file is open.
                        with util.posixfile(f):
                            nlink = util.nlinks(f)
                            if nlink < 1:
                                nlink = 2 # force mktempcopy (issue1922)
                except (OSError, IOError) as e:
                    if e.errno != errno.ENOENT:
                        raise
                    nlink = 0
                    util.makedirs(dirname, self.createmode, notindexed)
                if nlink > 0:
                    if self._trustnlink is None:
                        self._trustnlink = nlink > 1 or util.checknlink(f)
                    if nlink > 1 or not self._trustnlink:
                        util.rename(util.mktempcopy(f), f)
        fp = util.posixfile(f, mode)
        if nlink == 0:
            self._fixfilemode(f)
        if checkambig:
            if mode in ('r', 'rb'):
                raise error.Abort(_('implementation error: mode %s is not'
                                    ' valid for checkambig=True') % mode)
            fp = checkambigatclosing(fp)
        if (backgroundclose and
                isinstance(threading.currentThread(), threading._MainThread)):
            if not self._backgroundfilecloser:
                raise error.Abort(_('backgroundclose can only be used when a '
                                    'backgroundclosing context manager is '
                                    'active'))
            fp = delayclosedfile(fp, self._backgroundfilecloser)
        return fp
    def symlink(self, src, dst):
        self.audit(dst)
        linkname = self.join(dst)
        util.tryunlink(linkname)

        util.makedirs(os.path.dirname(linkname), self.createmode)

        if self._cansymlink:
            try:
                os.symlink(src, linkname)
            except OSError as err:
                raise OSError(err.errno, _('could not symlink to %r: %s') %
                              (src, encoding.strtolocal(err.strerror)),
                              linkname)
        else:
            self.write(dst, src)

    def join(self, path, *insidef):
        if path:
            return os.path.join(self.base, path, *insidef)
        else:
            return self.base

opener = vfs
class proxyvfs(object):
    def __init__(self, vfs):
        self.vfs = vfs

    @property
    def options(self):
        return self.vfs.options

    @options.setter
    def options(self, value):
        self.vfs.options = value

class filtervfs(abstractvfs, proxyvfs):
    '''Wrapper vfs for filtering filenames with a function.'''

    def __init__(self, vfs, filter):
        proxyvfs.__init__(self, vfs)
        self._filter = filter

    def __call__(self, path, *args, **kwargs):
        return self.vfs(self._filter(path), *args, **kwargs)

    def join(self, path, *insidef):
        if path:
            return self.vfs.join(self._filter(self.vfs.reljoin(path, *insidef)))
        else:
            return self.vfs.join(path)

filteropener = filtervfs

class readonlyvfs(abstractvfs, proxyvfs):
    '''Wrapper vfs preventing any writing.'''

    def __init__(self, vfs):
        proxyvfs.__init__(self, vfs)

    def __call__(self, path, mode='r', *args, **kw):
        if mode not in ('r', 'rb'):
            raise error.Abort(_('this vfs is read only'))
        return self.vfs(path, mode, *args, **kw)

    def join(self, path, *insidef):
        return self.vfs.join(path, *insidef)
class closewrapbase(object):
    """Base class of wrapper, which hooks closing

    Do not instantiate outside of the vfs layer.
    """
    def __init__(self, fh):
        object.__setattr__(self, r'_origfh', fh)

    def __getattr__(self, attr):
        return getattr(self._origfh, attr)

    def __setattr__(self, attr, value):
        return setattr(self._origfh, attr, value)

    def __delattr__(self, attr):
        return delattr(self._origfh, attr)

    def __enter__(self):
        return self._origfh.__enter__()

    def __exit__(self, exc_type, exc_value, exc_tb):
        raise NotImplementedError('attempted instantiating ' + str(type(self)))

    def close(self):
        raise NotImplementedError('attempted instantiating ' + str(type(self)))

class delayclosedfile(closewrapbase):
    """Proxy for a file object whose close is delayed.

    Do not instantiate outside of the vfs layer.
    """
    def __init__(self, fh, closer):
        super(delayclosedfile, self).__init__(fh)
        object.__setattr__(self, r'_closer', closer)

    def __exit__(self, exc_type, exc_value, exc_tb):
        self._closer.close(self._origfh)

    def close(self):
        self._closer.close(self._origfh)
class backgroundfilecloser(object):
    """Coordinates background closing of file handles on multiple threads."""
    def __init__(self, ui, expectedcount=-1):
        self._running = False
        self._entered = False
        self._threads = []
        self._threadexception = None

        # Only Windows/NTFS has slow file closing. So only enable by default
        # on that platform. But allow to be enabled elsewhere for testing.
        defaultenabled = pycompat.iswindows
        enabled = ui.configbool('worker', 'backgroundclose', defaultenabled)

        if not enabled:
            return

        # There is overhead to starting and stopping the background threads.
        # Don't do background processing unless the file count is large
        # enough to justify it.
        minfilecount = ui.configint('worker', 'backgroundcloseminfilecount')
        # FUTURE dynamically start background threads after minfilecount
        # closes. (We don't currently have any callers that don't know their
        # file count)
        if expectedcount > 0 and expectedcount < minfilecount:
            return

        maxqueue = ui.configint('worker', 'backgroundclosemaxqueue')
        threadcount = ui.configint('worker', 'backgroundclosethreadcount')

        ui.debug('starting %d threads for background file closing\n' %
                 threadcount)

        self._queue = util.queue(maxsize=maxqueue)
        self._running = True

        for i in range(threadcount):
            t = threading.Thread(target=self._worker, name='backgroundcloser')
            self._threads.append(t)
            t.start()

    def __enter__(self):
        self._entered = True
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        self._running = False

        # Wait for threads to finish closing so open files don't linger for
        # longer than lifetime of context manager.
        for t in self._threads:
            t.join()

    def _worker(self):
        """Main routine for worker thread."""
        while True:
            try:
                fh = self._queue.get(block=True, timeout=0.100)
                # Need to catch or the thread will terminate and
                # we could orphan file descriptors.
                try:
                    fh.close()
                except Exception as e:
                    # Stash so can re-raise from main thread later.
                    self._threadexception = e
            except util.empty:
                if not self._running:
                    break

    def close(self, fh):
        """Schedule a file for closing."""
        if not self._entered:
            raise error.Abort(_('can only call close() when context manager '
                                'active'))

        # If a background thread encountered an exception, raise now so we
        # fail fast. Otherwise we may potentially go on for minutes until
        # the error is acted on.
        if self._threadexception:
            e = self._threadexception
            self._threadexception = None
            raise e

        # If we're not actively running, close synchronously.
        if not self._running:
            fh.close()
            return

        self._queue.put(fh, block=True, timeout=None)
class checkambigatclosing(closewrapbase):
    """Proxy for a file object, to avoid ambiguity of file stat

    See also util.filestat for detail about "ambiguity of file stat".

    This proxy is useful only if the target file is guarded by any
    lock (e.g. repo.lock or repo.wlock)

    Do not instantiate outside of the vfs layer.
    """
    def __init__(self, fh):
        super(checkambigatclosing, self).__init__(fh)
        object.__setattr__(self, r'_oldstat', util.filestat.frompath(fh.name))

    def _checkambig(self):
        oldstat = self._oldstat
        if oldstat.stat:
            _avoidambig(self._origfh.name, oldstat)

    def __exit__(self, exc_type, exc_value, exc_tb):
        self._origfh.__exit__(exc_type, exc_value, exc_tb)
        self._checkambig()

    def close(self):
        self._origfh.close()
        self._checkambig()