localrepo: experimental support for non-zlib revlog compression

The final part of integrating the compression manager APIs into revlog
storage is the plumbing for repositories to advertise they are using
non-zlib storage and for revlogs to instantiate a non-zlib compression
engine.

The main intent of the compression manager work was to zstd all of the
things. Adding zstd to revlogs has proved to be more involved than other
places because revlogs are... special. Very small inputs and the use of
delta chains (which are themselves a form of compression) are a
completely different use case from streaming compression, which bundles
and the wire protocol employ. I've conducted numerous experiments with
zstd in revlogs and have yet to formalize compression settings and a
storage architecture that I'm confident I won't regret later. In other
words, I'm not yet ready to commit to a new mechanism for using zstd -
or any other compression format - in revlogs.

That being said, having some support for zstd (and other compression
formats) in revlogs in core is beneficial. It can allow others to
conduct experiments.

This patch introduces *highly experimental* support for non-zlib
compression formats in revlogs. Introduced is a config option to
control which compression engine to use. Also introduced is a namespace
of "exp-compression-*" requirements to denote support for non-zlib
compression in revlogs. I've prefixed the namespace with "exp-" (short
for "experimental") because I'm not confident of the requirements
"schema" and in no way want to give the illusion of supporting these
requirements in the future. I fully intend to drop support for these
requirements once we figure out what we're doing with zstd in revlogs.

A good portion of the patch is teaching the requirements system about
registered compression engines and passing the requested compression
engine as an opener option so revlogs can instantiate the proper
compression engine for new operations. That's a verbose way of saying
"we can now use zstd in revlogs!"

On an `hg pull` conversion of the mozilla-unified repo with no extra
redelta settings (like aggressivemergedeltas), we can see the impact of
zstd vs zlib in revlogs:

$ hg perfrevlogchunks -c
! chunk
! wall 2.032052 comb 2.040000 user 1.990000 sys 0.050000 (best of 5)
! wall 1.866360 comb 1.860000 user 1.820000 sys 0.040000 (best of 6)
! chunk batch
! wall 1.877261 comb 1.870000 user 1.860000 sys 0.010000 (best of 6)
! wall 1.705410 comb 1.710000 user 1.690000 sys 0.020000 (best of 6)

$ hg perfrevlogchunks -m
! chunk
! wall 2.721427 comb 2.720000 user 2.640000 sys 0.080000 (best of 4)
! wall 2.035076 comb 2.030000 user 1.950000 sys 0.080000 (best of 5)
! chunk batch
! wall 2.614561 comb 2.620000 user 2.580000 sys 0.040000 (best of 4)
! wall 1.910252 comb 1.910000 user 1.880000 sys 0.030000 (best of 6)

$ hg perfrevlog -c -d 1
! wall 4.812885 comb 4.820000 user 4.800000 sys 0.020000 (best of 3)
! wall 4.699621 comb 4.710000 user 4.700000 sys 0.010000 (best of 3)

$ hg perfrevlog -m -d 1000
! wall 34.252800 comb 34.250000 user 33.730000 sys 0.520000 (best of 3)
! wall 24.094999 comb 24.090000 user 23.320000 sys 0.770000 (best of 3)

Only modest wins for the changelog. But manifest reading is
significantly faster. What's going on?

One reason might be data volume. zstd decompresses faster. So given
more bytes, it will put more distance between it and zlib.

Another reason is size.
In the current design, zstd revlogs are *larger*:

debugcreatestreamclonebundle (size in bytes)
zlib: 1,638,852,492
zstd: 1,680,601,332

I haven't investigated this fully, but I reckon a significant cause of
larger revlogs is that the zstd frame/header has more bytes than
zlib's. For very small inputs or data that doesn't compress well, we'll
tend to store more uncompressed chunks than with zlib (because the
compressed size isn't smaller than the original). This will make revlog
reading faster because it is doing less decompression.

Moving on to bundle performance:

$ hg bundle -a -t none-v2 (total CPU time)
zlib: 102.79s
zstd:  97.75s

So, marginal CPU decrease for reading all chunks in all revlogs (this
is somewhat disappointing).

$ hg bundle -a -t <engine>-v2 (total CPU time)
zlib: 191.59s
zstd: 115.36s

This last test effectively measures the difference between zlib->zlib
and zstd->zstd for revlogs to bundle. This is a rough approximation of
what a server does during `hg clone`.

There are some promising results for zstd. But not enough for me to
feel comfortable advertising it to users. We'll get there...
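To make the plumbing described above concrete, here is a minimal sketch of
the pattern, under stated assumptions: it is not Mercurial's actual API, and
the `compengine` opener-option key, the `_engines` registry, and the helper
names are invented for illustration. The idea is that a repository
requirement of the form "exp-compression-<name>" selects a registered
engine, and that choice is handed to revlogs through an opener option, with
zlib as the default.

import zlib

# Hypothetical registry of compression engines available to revlogs.
# zlib is always present; zstd is registered only if a binding is installed.
_engines = {
    'zlib': lambda data: zlib.compress(data),
}
try:
    import zstandard  # optional third-party python-zstandard package
    _engines['zstd'] = lambda data: zstandard.ZstdCompressor().compress(data)
except ImportError:
    pass

def enginefromrequirements(requirements):
    """Map an "exp-compression-<name>" requirement to an engine name.

    Repositories without such a requirement keep using zlib.
    """
    for req in requirements:
        if req.startswith('exp-compression-'):
            return req[len('exp-compression-'):]
    return 'zlib'

def revlogcompressor(openeropts):
    """Pick the compressor a revlog uses for newly written chunks."""
    name = openeropts.get('compengine', 'zlib')
    if name not in _engines:
        raise KeyError('compression engine %r is not registered' % name)
    return _engines[name]

# A repository whose requirements include "exp-compression-zstd" would end
# up with {'compengine': 'zstd'}; a plain repository stays on zlib.
opts = {'compengine': enginefromrequirements(['revlogv1', 'store'])}
compress = revlogcompressor(opts)
assert compress(b'some revlog chunk')  # zlib-compressed bytes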

server.py | 335 lines | 11.3 KiB | text/x-python
File last commit: r30639:d524c885 (default); viewed at changeset r30818:4c0a5a25 (default)
# hgweb/server.py - The standalone hg web server.
#
# Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import
import errno
import os
import socket
import sys
import traceback
from ..i18n import _
from .. import (
error,
pycompat,
util,
)
httpservermod = util.httpserver
socketserver = util.socketserver
urlerr = util.urlerr
urlreq = util.urlreq
from . import (
common,
)
def _splitURI(uri):
"""Return path and query that has been split from uri
Just like CGI environment, the path is unquoted, the query is
not.
"""
if '?' in uri:
path, query = uri.split('?', 1)
else:
path, query = uri, ''
return urlreq.unquote(path), query
class _error_logger(object):
def __init__(self, handler):
self.handler = handler
def flush(self):
pass
def write(self, str):
self.writelines(str.split('\n'))
def writelines(self, seq):
for msg in seq:
self.handler.log_error("HG error: %s", msg)
class _httprequesthandler(httpservermod.basehttprequesthandler):
url_scheme = 'http'
@staticmethod
def preparehttpserver(httpserver, ui):
"""Prepare .socket of new HTTPServer instance"""
pass
def __init__(self, *args, **kargs):
self.protocol_version = 'HTTP/1.1'
httpservermod.basehttprequesthandler.__init__(self, *args, **kargs)
def _log_any(self, fp, format, *args):
fp.write("%s - - [%s] %s\n" % (self.client_address[0],
self.log_date_time_string(),
format % args))
fp.flush()
def log_error(self, format, *args):
self._log_any(self.server.errorlog, format, *args)
def log_message(self, format, *args):
self._log_any(self.server.accesslog, format, *args)
def log_request(self, code='-', size='-'):
xheaders = []
if util.safehasattr(self, 'headers'):
xheaders = [h for h in self.headers.items()
if h[0].startswith('x-')]
self.log_message('"%s" %s %s%s',
self.requestline, str(code), str(size),
''.join([' %s:%s' % h for h in sorted(xheaders)]))
def do_write(self):
try:
self.do_hgweb()
except socket.error as inst:
if inst[0] != errno.EPIPE:
raise
def do_POST(self):
try:
self.do_write()
except Exception:
self._start_response("500 Internal Server Error", [])
self._write("Internal Server Error")
self._done()
tb = "".join(traceback.format_exception(*sys.exc_info()))
self.log_error("Exception happened during processing "
"request '%s':\n%s", self.path, tb)
def do_GET(self):
self.do_POST()
def do_hgweb(self):
path, query = _splitURI(self.path)
env = {}
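        # Build a CGI-style WSGI environ dict from the raw HTTP request.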
env['GATEWAY_INTERFACE'] = 'CGI/1.1'
env['REQUEST_METHOD'] = self.command
env['SERVER_NAME'] = self.server.server_name
env['SERVER_PORT'] = str(self.server.server_port)
env['REQUEST_URI'] = self.path
env['SCRIPT_NAME'] = self.server.prefix
env['PATH_INFO'] = path[len(self.server.prefix):]
env['REMOTE_HOST'] = self.client_address[0]
env['REMOTE_ADDR'] = self.client_address[0]
if query:
env['QUERY_STRING'] = query
if self.headers.typeheader is None:
env['CONTENT_TYPE'] = self.headers.type
else:
env['CONTENT_TYPE'] = self.headers.typeheader
length = self.headers.getheader('content-length')
if length:
env['CONTENT_LENGTH'] = length
for header in [h for h in self.headers.keys()
if h not in ('content-type', 'content-length')]:
hkey = 'HTTP_' + header.replace('-', '_').upper()
hval = self.headers.getheader(header)
hval = hval.replace('\n', '').strip()
if hval:
env[hkey] = hval
env['SERVER_PROTOCOL'] = self.request_version
env['wsgi.version'] = (1, 0)
env['wsgi.url_scheme'] = self.url_scheme
if env.get('HTTP_EXPECT', '').lower() == '100-continue':
self.rfile = common.continuereader(self.rfile, self.wfile.write)
env['wsgi.input'] = self.rfile
env['wsgi.errors'] = _error_logger(self)
env['wsgi.multithread'] = isinstance(self.server,
socketserver.ThreadingMixIn)
env['wsgi.multiprocess'] = isinstance(self.server,
socketserver.ForkingMixIn)
env['wsgi.run_once'] = 0
self.saved_status = None
self.saved_headers = []
self.sent_headers = False
self.length = None
self._chunked = None
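        # Run the WSGI application; headers are sent lazily on the first
        # body write (or below, if the application produced no output).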
for chunk in self.server.application(env, self._start_response):
self._write(chunk)
if not self.sent_headers:
self.send_headers()
self._done()
def send_headers(self):
if not self.saved_status:
raise AssertionError("Sending headers before "
"start_response() called")
saved_status = self.saved_status.split(None, 1)
saved_status[0] = int(saved_status[0])
self.send_response(*saved_status)
self.length = None
self._chunked = False
for h in self.saved_headers:
self.send_header(*h)
if h[0].lower() == 'content-length':
self.length = int(h[1])
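        # Without a Content-Length (and when the response may carry a body),
        # fall back to chunked transfer encoding on HTTP/1.1 keep-alive
        # connections; otherwise close the connection after the response.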
if (self.length is None and
saved_status[0] != common.HTTP_NOT_MODIFIED):
self._chunked = (not self.close_connection and
self.request_version == "HTTP/1.1")
if self._chunked:
self.send_header('Transfer-Encoding', 'chunked')
else:
self.send_header('Connection', 'close')
self.end_headers()
self.sent_headers = True
def _start_response(self, http_status, headers, exc_info=None):
code, msg = http_status.split(None, 1)
code = int(code)
self.saved_status = http_status
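        # Strip hop-by-hop headers: connection handling and transfer
        # encoding are decided by this handler in send_headers(), not by
        # the WSGI application.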
bad_headers = ('connection', 'transfer-encoding')
self.saved_headers = [h for h in headers
if h[0].lower() not in bad_headers]
return self._write
def _write(self, data):
if not self.saved_status:
raise AssertionError("data written before start_response() called")
elif not self.sent_headers:
self.send_headers()
if self.length is not None:
if len(data) > self.length:
raise AssertionError("Content-length header sent, but more "
"bytes than specified are being written.")
self.length = self.length - len(data)
elif self._chunked and data:
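            # Frame the payload as an HTTP/1.1 chunk: hex length, CRLF,
            # data, CRLF.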
data = '%x\r\n%s\r\n' % (len(data), data)
self.wfile.write(data)
self.wfile.flush()
def _done(self):
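        # A zero-length chunk marks the end of a chunked-encoded response.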
if self._chunked:
self.wfile.write('0\r\n\r\n')
self.wfile.flush()
class _httprequesthandlerssl(_httprequesthandler):
"""HTTPS handler based on Python's ssl module"""
url_scheme = 'https'
@staticmethod
def preparehttpserver(httpserver, ui):
try:
from .. import sslutil
sslutil.modernssl
except ImportError:
raise error.Abort(_("SSL support is unavailable"))
certfile = ui.config('web', 'certificate')
# These config options are currently only meant for testing. Use
# at your own risk.
cafile = ui.config('devel', 'servercafile')
reqcert = ui.configbool('devel', 'serverrequirecert')
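        # Wrap the listening socket so every accepted connection speaks TLS.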
httpserver.socket = sslutil.wrapserversocket(httpserver.socket,
ui,
certfile=certfile,
cafile=cafile,
requireclientcert=reqcert)
def setup(self):
self.connection = self.request
self.rfile = socket._fileobject(self.request, "rb", self.rbufsize)
self.wfile = socket._fileobject(self.request, "wb", self.wbufsize)
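# Prefer a threading server; fall back to forking where threads are
# unavailable, and to a plain single-request server otherwise.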
try:
import threading
threading.activeCount() # silence pyflakes and bypass demandimport
_mixin = socketserver.ThreadingMixIn
except ImportError:
if util.safehasattr(os, "fork"):
_mixin = socketserver.ForkingMixIn
else:
class _mixin(object):
pass
def openlog(opt, default):
if opt and opt != '-':
return open(opt, 'a')
return default
class MercurialHTTPServer(_mixin, httpservermod.httpserver, object):
# SO_REUSEADDR has broken semantics on windows
if pycompat.osname == 'nt':
allow_reuse_address = 0
def __init__(self, ui, app, addr, handler, **kwargs):
httpservermod.httpserver.__init__(self, addr, handler, **kwargs)
self.daemon_threads = True
self.application = app
handler.preparehttpserver(self, ui)
prefix = ui.config('web', 'prefix', '')
if prefix:
prefix = '/' + prefix.strip('/')
self.prefix = prefix
alog = openlog(ui.config('web', 'accesslog', '-'), ui.fout)
elog = openlog(ui.config('web', 'errorlog', '-'), ui.ferr)
self.accesslog = alog
self.errorlog = elog
self.addr, self.port = self.socket.getsockname()[0:2]
self.fqaddr = socket.getfqdn(addr[0])
class IPv6HTTPServer(MercurialHTTPServer):
address_family = getattr(socket, 'AF_INET6', None)
def __init__(self, *args, **kwargs):
if self.address_family is None:
raise error.RepoError(_('IPv6 is not available on this system'))
super(IPv6HTTPServer, self).__init__(*args, **kwargs)
def create_server(ui, app):
if ui.config('web', 'certificate'):
handler = _httprequesthandlerssl
else:
handler = _httprequesthandler
if ui.configbool('web', 'ipv6'):
cls = IPv6HTTPServer
else:
cls = MercurialHTTPServer
# ugly hack due to python issue5853 (for threaded use)
try:
import mimetypes
mimetypes.init()
except UnicodeDecodeError:
# Python 2.x's mimetypes module attempts to decode strings
# from Windows' ANSI APIs as ascii (fail), then re-encode them
# as ascii (clown fail), because the default Python Unicode
# codec is hardcoded as ascii.
sys.argv # unwrap demand-loader so that reload() works
reload(sys) # resurrect sys.setdefaultencoding()
oldenc = sys.getdefaultencoding()
sys.setdefaultencoding("latin1") # or any full 8-bit encoding
mimetypes.init()
sys.setdefaultencoding(oldenc)
address = ui.config('web', 'address', '')
port = util.getport(ui.config('web', 'port', 8000))
try:
return cls(ui, app, (address, port), handler)
except socket.error as inst:
raise error.Abort(_("cannot start server at '%s:%d': %s")
% (address, port, inst.args[1]))