wireproto: add streams to frame-based protocol
Previously, the frame-based protocol was just a series of frames, with each frame associated with a request ID.

In order to scale the protocol, we'll want to enable the use of compression. While it is possible to enable compression at the socket/pipe level, this has its disadvantages. The big one is that it undermines the point of frames being standalone, atomic units that can be read and written: if you add compression above the framing protocol, you are back to having a stream-based protocol as opposed to something frame-based.

So in order to preserve frames, compression needs to occur at the frame payload level. Compressing each frame's payload individually will limit compression ratios because the window size of the compressor will be limited by the max frame size, which is 32-64kb as currently defined. It will also add CPU overhead, as it is more efficient for compressors to operate on fewer, larger blocks of data than more, smaller blocks. So compressing each frame independently is out.

This means we need to compress each frame's payload as if it is part of a larger stream.

The simplest approach is to have 1 stream per connection. This could certainly work. However, it has disadvantages (documented below). We could also have 1 stream per RPC/command invocation. (This is the model HTTP/2 goes with.) This also has disadvantages.

The main disadvantage to one global stream is that it has the very real potential to create CPU bottlenecks doing compression. Networks are only getting faster and the performance of single CPU cores has been relatively flat. Newer compression formats like zstandard offer better CPU cycle efficiency than predecessors like zlib. But it is still all too common to saturate your CPU with compression overhead long before you saturate the network pipe.

The main disadvantage with streams per request is that you can't reap the benefits of the compression context for multiple requests. For example, if you send 1000 RPC requests (or HTTP/2 requests for that matter), the response to each would have its own compression context. The overall size of the raw responses would be larger because compression contexts wouldn't be able to reference data from another request or response.

The approach for streams as implemented in this commit is to support N streams per connection and for streams to potentially span requests and responses. As explained by the added internals docs, this facilitates servers and clients delegating independent streams and compression to independent threads / CPU cores. This helps alleviate the CPU bottleneck of compression. This design also allows compression contexts to be reused across requests/responses. This can result in improved compression ratios and less overhead for compressors and decompressors having to build new contexts.

Another feature that was defined was the ability for individual frames within a stream to declare whether that individual frame's payload uses the content encoding (read: compression) defined by the stream. The idea here is that some servers may serve data from a combination of caches and dynamic resolution. Data coming from caches may be pre-compressed. We want to facilitate servers being able to essentially stream bytes from caches to the wire with minimal overhead. Being able to mix and match which frames are compressed within a stream enables these kinds of advanced server functionality.

This commit defines the new streams mechanism.
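To make the stream/frame relationship concrete, here is a minimal Python sketch of a sender that tags each frame with a stream ID and a per-frame flag saying whether the payload uses the stream's content encoding. The header layout, flag value, and names are assumptions made purely for illustration, with zlib standing in for whatever encoder a stream would actually negotiate; this is not the wire format the commit defines.

import struct
import zlib

# Hypothetical fixed-size header: request ID, stream ID, stream flags,
# frame type, frame flags. Field widths and flag values are made up for
# illustration; the real framing protocol defines its own layout.
_HEADER = struct.Struct('<HBBBB')

FLAG_PAYLOAD_ENCODED = 0x01  # payload uses the stream's content encoding

class outputstream(object):
    """A logical stream owning a long-lived compression context."""

    def __init__(self, streamid):
        self.streamid = streamid
        self._compressor = zlib.compressobj()

    def makeframe(self, requestid, frametype, payload, encoded=True):
        frameflags = 0
        if encoded:
            # Compress within the stream-wide context; flushing lets the
            # receiver decode this frame's bytes as soon as they arrive.
            payload = (self._compressor.compress(payload) +
                       self._compressor.flush(zlib.Z_SYNC_FLUSH))
            frameflags |= FLAG_PAYLOAD_ENCODED
        header = _HEADER.pack(requestid, self.streamid, 0, frametype,
                              frameflags)
        return header + payload

# Frames from different streams can be interleaved on one connection, and a
# pre-compressed payload (e.g. from a cache) can be sent with encoded=False
# so the server never touches the compressor for it.
stream1 = outputstream(streamid=1)
frame = stream1.makeframe(requestid=41, frametype=0, payload=b'command data')

Because each stream owns its own compressor, a server or client can hand different streams to different threads or CPU cores, which is the scaling property the commit message describes.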
Basic code for supporting streams in frames has been added. But that code is seriously lacking and doesn't fully conform to the defined protocol. For example, we don't close any streams. And support for content encoding within streams is not yet implemented. The change was rather invasive and I didn't think it would be reasonable to implement the entire feature in a single commit.

For the record, I would have loved to reuse an existing multiplexing protocol to build the new wire protocol on top of. However, I couldn't find a protocol that offers the performance and scaling characteristics that I desired. Namely, it should support multiple compression contexts to facilitate scaling out to multiple CPU cores, and compression contexts should be able to live longer than single RPC requests (see the sketch at the end of this message).

HTTP/2 *almost* fits the bill. But the semantics of HTTP message exchange state that streams can only live for a single request-response. We /could/ tunnel on top of HTTP/2 streams and frames with HEADER and DATA frames. But there's no guarantee that HTTP/2 libraries and proxies would allow us to use HTTP/2 streams and frames without the HTTP message exchange semantics defined in RFC 7540 Section 8. Other RPC protocols like gRPC are built on top of HTTP/2 and thus preserve its semantics of one stream per RPC invocation. Even QUIC does this. We could attempt to invent a higher-level stream that spans HTTP/2 streams. But this would be violating HTTP/2 because there is no guarantee that HTTP/2 streams are routed to the same server. The best we can do - which is what this protocol does - is shoehorn all request and response data into a single HTTP message and create streams within. At that point, we've defined a Content-Type in HTTP parlance. It just so happens our media type can also work as a standalone, stream-based protocol, without leaning on HTTP or a similar protocol.

Differential Revision: https://phab.mercurial-scm.org/D2907
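As a back-of-the-envelope illustration of why compression contexts that outlive a single request matter, the following sketch compares a fresh compression context per response against one context shared across all responses. It uses standard-library zlib and made-up response data purely for illustration; the real protocol is intended to work with encoders such as zstandard as well.

import zlib

# Hypothetical: 50 RPC responses that share a lot of structure, as wire
# protocol responses tend to.
responses = [b'{"status": "ok", "node": "%040d", "branch": "default"}' % i
             for i in range(50)]

# One compression context per response (the stream-per-request model).
perrequest = sum(len(zlib.compress(r)) for r in responses)

# One long-lived context spanning every response, flushed after each one so
# the receiver can decode each response as soon as its bytes arrive.
ctx = zlib.compressobj()
shared = sum(len(ctx.compress(r) + ctx.flush(zlib.Z_SYNC_FLUSH))
             for r in responses)

print(perrequest, shared)  # the shared context emits noticeably fewer bytes

The shared context can reference data from earlier responses, which is exactly the benefit lost when every request or HTTP/2 stream carries its own context.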

pycompat.py
# pycompat.py - portability shim for python 3
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Mercurial portability shim for python 3.
This contains aliases to hide python version-specific details from the core.
"""
from __future__ import absolute_import
import getopt
import inspect
import os
import shlex
import sys
ispy3 = (sys.version_info[0] >= 3)
ispypy = (r'__pypy__' in sys.builtin_module_names)
if not ispy3:
    import cookielib
    import cPickle as pickle
    import httplib
    import Queue as _queue
    import SocketServer as socketserver
    import xmlrpclib
else:
    import http.cookiejar as cookielib
    import http.client as httplib
    import pickle
    import queue as _queue
    import socketserver
    import xmlrpc.client as xmlrpclib

empty = _queue.Empty
queue = _queue.Queue

def identity(a):
    return a
if ispy3:
    import builtins
    import functools
    import io
    import struct

    fsencode = os.fsencode
    fsdecode = os.fsdecode
    oscurdir = os.curdir.encode('ascii')
    oslinesep = os.linesep.encode('ascii')
    osname = os.name.encode('ascii')
    ospathsep = os.pathsep.encode('ascii')
    ospardir = os.pardir.encode('ascii')
    ossep = os.sep.encode('ascii')
    osaltsep = os.altsep
    if osaltsep:
        osaltsep = osaltsep.encode('ascii')
    # os.getcwd() on Python 3 returns string, but it has os.getcwdb() which
    # returns bytes.
    getcwd = os.getcwdb
    sysplatform = sys.platform.encode('ascii')
    sysexecutable = sys.executable
    if sysexecutable:
        sysexecutable = os.fsencode(sysexecutable)
    bytesio = io.BytesIO
    # TODO deprecate stringio name, as it is a lie on Python 3.
    stringio = bytesio

    def maplist(*args):
        return list(map(*args))

    def rangelist(*args):
        return list(range(*args))

    def ziplist(*args):
        return list(zip(*args))

    rawinput = input
    getargspec = inspect.getfullargspec

    # TODO: .buffer might not exist if std streams were replaced; we'll need
    # a silly wrapper to make a bytes stream backed by a unicode one.
    stdin = sys.stdin.buffer
    stdout = sys.stdout.buffer
    stderr = sys.stderr.buffer

    # Since Python 3 converts argv to wchar_t type by Py_DecodeLocale() on Unix,
    # we can use os.fsencode() to get back bytes argv.
    #
    # https://hg.python.org/cpython/file/v3.5.1/Programs/python.c#l55
    #
    # TODO: On Windows, the native argv is wchar_t, so we'll need a different
    # workaround to simulate the Python 2 (i.e. ANSI Win32 API) behavior.
    if getattr(sys, 'argv', None) is not None:
        sysargv = list(map(os.fsencode, sys.argv))

    bytechr = struct.Struct('>B').pack
    byterepr = b'%r'.__mod__
    class bytestr(bytes):
        """A bytes which mostly acts as a Python 2 str

        >>> bytestr(), bytestr(bytearray(b'foo')), bytestr(u'ascii'), bytestr(1)
        ('', 'foo', 'ascii', '1')
        >>> s = bytestr(b'foo')
        >>> assert s is bytestr(s)

        __bytes__() should be called if provided:

        >>> class bytesable(object):
        ...     def __bytes__(self):
        ...         return b'bytes'
        >>> bytestr(bytesable())
        'bytes'

        There's no implicit conversion from non-ascii str as its encoding is
        unknown:

        >>> bytestr(chr(0x80)) # doctest: +ELLIPSIS
        Traceback (most recent call last):
        ...
        UnicodeEncodeError: ...

        Comparison between bytestr and bytes should work:

        >>> assert bytestr(b'foo') == b'foo'
        >>> assert b'foo' == bytestr(b'foo')
        >>> assert b'f' in bytestr(b'foo')
        >>> assert bytestr(b'f') in b'foo'

        Sliced elements should be bytes, not integer:

        >>> s[1], s[:2]
        (b'o', b'fo')
        >>> list(s), list(reversed(s))
        ([b'f', b'o', b'o'], [b'o', b'o', b'f'])

        As bytestr type isn't propagated across operations, you need to cast
        bytes to bytestr explicitly:

        >>> s = bytestr(b'foo').upper()
        >>> t = bytestr(s)
        >>> s[0], t[0]
        (70, b'F')

        Be careful to not pass a bytestr object to a function which expects
        bytearray-like behavior.

        >>> t = bytes(t) # cast to bytes
        >>> assert type(t) is bytes
        """

        def __new__(cls, s=b''):
            if isinstance(s, bytestr):
                return s
            if (not isinstance(s, (bytes, bytearray))
                and not hasattr(s, u'__bytes__')): # hasattr-py3-only
                s = str(s).encode(u'ascii')
            return bytes.__new__(cls, s)

        def __getitem__(self, key):
            s = bytes.__getitem__(self, key)
            if not isinstance(s, bytes):
                s = bytechr(s)
            return s

        def __iter__(self):
            return iterbytestr(bytes.__iter__(self))

        def __repr__(self):
            return bytes.__repr__(self)[1:] # drop b''

    def iterbytestr(s):
        """Iterate bytes as if it were a str object of Python 2"""
        return map(bytechr, s)

    def maybebytestr(s):
        """Promote bytes to bytestr"""
        if isinstance(s, bytes):
            return bytestr(s)
        return s
    def sysbytes(s):
        """Convert an internal str (e.g. keyword, __doc__) back to bytes

        This never raises UnicodeEncodeError, but only ASCII characters
        can be round-tripped by sysstr(sysbytes(s)).
        """
        return s.encode(u'utf-8')

    def sysstr(s):
        """Return a keyword str to be passed to Python functions such as
        getattr() and str.encode()

        This never raises UnicodeDecodeError. Non-ascii characters are
        considered invalid and mapped to arbitrary but unique code points
        such that 'sysstr(a) != sysstr(b)' for all 'a != b'.
        """
        if isinstance(s, builtins.str):
            return s
        return s.decode(u'latin-1')

    def strurl(url):
        """Converts a bytes url back to str"""
        if isinstance(url, bytes):
            return url.decode(u'ascii')
        return url

    def bytesurl(url):
        """Converts a str url to bytes by encoding in ascii"""
        if isinstance(url, str):
            return url.encode(u'ascii')
        return url

    def raisewithtb(exc, tb):
        """Raise exception with the given traceback"""
        raise exc.with_traceback(tb)

    def getdoc(obj):
        """Get docstring as bytes; may be None so gettext() won't confuse it
        with _('')"""
        doc = getattr(obj, u'__doc__', None)
        if doc is None:
            return doc
        return sysbytes(doc)
    def _wrapattrfunc(f):
        @functools.wraps(f)
        def w(object, name, *args):
            return f(object, sysstr(name), *args)
        return w

    # these wrappers are automagically imported by hgloader
    delattr = _wrapattrfunc(builtins.delattr)
    getattr = _wrapattrfunc(builtins.getattr)
    hasattr = _wrapattrfunc(builtins.hasattr)
    setattr = _wrapattrfunc(builtins.setattr)
    xrange = builtins.range
    unicode = str

    def open(name, mode='r', buffering=-1, encoding=None):
        return builtins.open(name, sysstr(mode), buffering, encoding)

    safehasattr = _wrapattrfunc(builtins.hasattr)

    def _getoptbwrapper(orig, args, shortlist, namelist):
        """
        Takes bytes arguments, converts them to unicode, passes them to
        getopt.getopt(), converts the returned values back to bytes and then
        returns them for Python 3 compatibility, as getopt.getopt() doesn't
        accept bytes on Python 3.
        """
        args = [a.decode('latin-1') for a in args]
        shortlist = shortlist.decode('latin-1')
        namelist = [a.decode('latin-1') for a in namelist]
        opts, args = orig(args, shortlist, namelist)
        opts = [(a[0].encode('latin-1'), a[1].encode('latin-1'))
                for a in opts]
        args = [a.encode('latin-1') for a in args]
        return opts, args

    def strkwargs(dic):
        """
        Converts the keys of a Python dictionary to str, i.e. unicode, so that
        they can be passed as keyword arguments, as dictionaries with bytes
        keys can't be passed as keyword arguments to functions on Python 3.
        """
        dic = dict((k.decode('latin-1'), v) for k, v in dic.iteritems())
        return dic

    def byteskwargs(dic):
        """
        Converts keys of Python dictionaries to bytes as they were converted
        to str to pass that dictionary as a keyword argument on Python 3.
        """
        dic = dict((k.encode('latin-1'), v) for k, v in dic.iteritems())
        return dic

    # TODO: handle shlex.shlex().
    def shlexsplit(s, comments=False, posix=True):
        """
        Takes a bytes argument, converts it to str, i.e. unicode, passes that
        into shlex.split(), converts the returned value to bytes and returns
        that for Python 3 compatibility, as shlex.split() doesn't accept bytes
        on Python 3.
        """
        ret = shlex.split(s.decode('latin-1'), comments, posix)
        return [a.encode('latin-1') for a in ret]

    def emailparser(*args, **kwargs):
        import email.parser
        return email.parser.BytesParser(*args, **kwargs)
else:
    import cStringIO

    bytechr = chr
    byterepr = repr
    bytestr = str
    iterbytestr = iter
    maybebytestr = identity
    sysbytes = identity
    sysstr = identity
    strurl = identity
    bytesurl = identity

    # this can't be parsed on Python 3
    exec('def raisewithtb(exc, tb):\n'
         ' raise exc, None, tb\n')

    def fsencode(filename):
        """
        Partial backport from os.py in Python 3, which only accepts bytes.
        In Python 2, our paths should only ever be bytes, a unicode path
        indicates a bug.
        """
        if isinstance(filename, str):
            return filename
        else:
            raise TypeError(
                "expect str, not %s" % type(filename).__name__)

    # In Python 2, fsdecode() is very likely to receive bytes, so it's
    # better not to touch the Python 2 part as it's already working fine.
    fsdecode = identity

    def getdoc(obj):
        return getattr(obj, '__doc__', None)

    _notset = object()

    def safehasattr(thing, attr):
        return getattr(thing, attr, _notset) is not _notset

    def _getoptbwrapper(orig, args, shortlist, namelist):
        return orig(args, shortlist, namelist)

    strkwargs = identity
    byteskwargs = identity

    oscurdir = os.curdir
    oslinesep = os.linesep
    osname = os.name
    ospathsep = os.pathsep
    ospardir = os.pardir
    ossep = os.sep
    osaltsep = os.altsep
    stdin = sys.stdin
    stdout = sys.stdout
    stderr = sys.stderr
    if getattr(sys, 'argv', None) is not None:
        sysargv = sys.argv
    sysplatform = sys.platform
    getcwd = os.getcwd
    sysexecutable = sys.executable
    shlexsplit = shlex.split
    bytesio = cStringIO.StringIO
    stringio = bytesio
    maplist = map
    rangelist = range
    ziplist = zip
    rawinput = raw_input
    getargspec = inspect.getargspec

    def emailparser(*args, **kwargs):
        import email.parser
        return email.parser.Parser(*args, **kwargs)
isjython = sysplatform.startswith('java')
isdarwin = sysplatform == 'darwin'
isposix = osname == 'posix'
iswindows = osname == 'nt'
def getoptb(args, shortlist, namelist):
    return _getoptbwrapper(getopt.getopt, args, shortlist, namelist)

def gnugetoptb(args, shortlist, namelist):
    return _getoptbwrapper(getopt.gnu_getopt, args, shortlist, namelist)