changegroup: remove reordering control (BC)

This logic - including the experimental bundle.reorder option - was
originally added in a8e3931e3fb5 in 2011 and then later ported to
changegroup.py.

The intent of this option and associated logic is to control the ordering
of revisions in deltagroups in changegroups. At the time it was
implemented, only changegroup version 1 existed and generaldelta revlogs
were just coming into the world. Changegroup version 1 requires that
deltas be made against the last revision sent over the wire. Used with
generaldelta, this created an impedance mismatch of sorts and resulted in
changegroup producers spending a lot of time recomputing deltas. Revision
reordering was introduced so outgoing revisions would be sent in
"generaldelta order" and producers would be able to reuse internal deltas
from storage.

Later on, we introduced changegroup version 2. It supported denoting which
revision a delta was against. So we no longer needed to sort outgoing
revisions to ensure optimal delta generation from the producer. So,
subsequent changegroup versions disabled reordering.

We also later made the changelog not store deltas by default. And we also
made the changelog send out deltas in storage order. Why we do this for
the changelog, I'm not sure. Maybe we want to preserve revision order
across clones? It doesn't really matter for this commit.

Fast forward to 2018. We want to abstract storage backends. And having
changegroup code require knowledge about how deltas are stored internally
interferes with that goal. This commit removes reordering control from
changegroup generation.

After this commit, the reordering behavior is:

* The changelog is always sent out in storage order (no behavior change).
* Non-changelog generaldelta revlogs are reordered to always be in DAG
  topological order (previously, generaldelta revlogs would be emitted in
  storage order for version 2 and 3 changegroups).
* Non-changelog non-generaldelta revlogs are sent in storage order (no
  behavior change).
* There exists no config option to override behavior.

The big difference here is that generaldelta revlogs now *always* have
their revisions sorted in DAG order before going out over the wire. This
behavior was previously only done for changegroup version 1. Version 2 and
version 3 changegroups disabled reordering because the interchange format
supported encoding arbitrary delta parents, so reordering wasn't strictly
necessary.

I can think of a few significant implications for this change.

Because changegroup receivers will now see non-changelog revisions in DAG
order instead of storage order, the internal storage order of manifests
and files may differ substantially between producer and consumer. I don't
think this matters that much, since the storage order of manifests and
files is largely hidden from users. Only the storage order of the
changelog matters (because `hg log` shows the changelog in storage order).
I don't think there should be any controversy here.

The reordering of revisions has implications for changegroup producers.
Previously, generaldelta revlogs would be emitted in storage order. And in
the common case, the internally-stored delta could effectively be copied
from disk into the deltagroup delta. This meant that emitting delta groups
for generaldelta revlogs would be mostly linear read I/O. This is
desirable for performance. With us now reordering generaldelta revlog
revisions in DAG order, the read operations may use more random I/O
instead of sequential I/O. This could result in performance loss. But with
the prevalence of SSDs and fast random I/O, I'm not too worried.

(Note: the optimal emission order for revlogs is actually delta encoding
order. But the changegroup code wasn't doing that before or after this
change. We could potentially implement that in a later commit.)

Changegroups in DAG order will have implications for receivers.
Previously, receiving storage order might mean seeing a number of
interleaved branches. This would mean long delta chains, sparse I/O, and
possibly more fulltext revisions instead of deltas, blowing up storage.
(This is the same set of problems that sparse revlogs aims to address.)
With the producer now sending revisions in DAG order, the receiver also
stores revisions in DAG order. That means revisions for the same DAG
branch are all grouped together. And this should yield better storage
outcomes. In other words, sending the reordered changegroup allows the
receiver to have better storage order and allows the producer to not
propagate its (possibly sub-optimal) internal storage order.

On the mozilla-unified repository, this change influences bundle
generation:

$ hg bundle -t none-v2 -a
before: time: real 355.680 secs (user 256.790+0.000 sys 16.820+0.000)
after:  time: real 382.950 secs (user 281.700+0.000 sys 17.690+0.000)

before: 7,150,228,967 bytes (uncompressed)
after:  7,041,556,273 bytes (uncompressed)
before: 1,669,063,234 bytes (zstd l=3)
after:  1,628,598,830 bytes (zstd l=3)

$ hg unbundle
before: time: real 511.910 secs (user 466.750+0.000 sys 32.680+0.000)
after:  time: real 487.790 secs (user 443.940+0.000 sys 30.840+0.000)

00manifest.d size:
source: 274,924,292 bytes
before: 304,741,626 bytes
after:  245,252,087 bytes

.hg/store total file size:
source: 2,649,133,490
before: 2,680,888,130
after:  2,627,875,673

We see the bundle size drop. That's probably because if a revlog
internally isn't storing a delta, it will choose to delta against the last
emitted revision. And on repos with interleaved branches (like
mozilla-unified), the previous revision could be an unrelated branch and
therefore be a large delta. But with this patch, the previous revision is
likely p1 or p2 and a delta should be small.

We also see the manifest size drop by ~50 MB. It's worth noting that the
manifest actually *increased* in size by ~25 MB in the old strategy and
decreased ~25 MB from its source in the new strategy. Again, my
explanation for this is that the DAG ordering in the changegroup is
resulting in better grouping of revisions in the receiver, which results
in more compact delta chains and higher storage efficiency.

Unbundle time also dropped. I suspect this is due to the revlog having to
work less to compute deltas since the incoming deltas are more optimal,
i.e. the receiver spends less time resolving fulltext revisions as
incoming deltas bounce around between DAG branches and delta chains.

We also see bundle generation time increase. This is not desirable.
However, the regression is only significant on the original repository: if
we generate a bundle from the repository created from the new, always
reordered bundles, we're close to baseline (if not at it, within expected
noise):

$ hg bundle -t none-v2 -a
before (original): time: real 355.680 secs (user 256.790+0.000 sys 16.820+0.000)
after (original):  time: real 382.950 secs (user 281.700+0.000 sys 17.690+0.000)
after (new repo):  time: real 362.280 secs (user 260.300+0.000 sys 17.700+0.000)

This regression is a bit worrying because it will impact serving canonical
repositories (which don't have optimal internal storage unless they are
reordered - possibly as part of running `hg debugupgraderepo`). However,
this regression will only be noticed with very large changegroups. And I'm
guessing/hoping that any repository that large is using clonebundles to
mitigate server load.

Again, sending DAG order isn't the optimal send order for servers: sending
in storage-delta order is. But in order to enable storage-optimal send
order, we'll need a storage API that handles sorting. Future commits will
introduce such an API.

Differential Revision: https://phab.mercurial-scm.org/D4721

# cborutil.py - CBOR extensions
#
# Copyright 2018 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import
import struct
import sys
from .. import pycompat
# Very short version of RFC 7049...
#
# Each item begins with a byte. The 3 high bits of that byte denote the
# "major type." The lower 5 bits denote the "subtype." Each major type
# has its own encoding mechanism.
#
# Most types have lengths. However, bytestring, string, array, and map
# can be indefinite length. These are denoted by a subtype with value 31.
# Sub-components of those types then come afterwards and are terminated
# by a "break" byte.
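#
# For example, the initial byte 0x45 (0b010_00101) has major type 2
# (bytestring) in its high 3 bits and subtype 5 in its low 5 bits,
# announcing a definite length bytestring of 5 bytes.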
MAJOR_TYPE_UINT = 0
MAJOR_TYPE_NEGINT = 1
MAJOR_TYPE_BYTESTRING = 2
MAJOR_TYPE_STRING = 3
MAJOR_TYPE_ARRAY = 4
MAJOR_TYPE_MAP = 5
MAJOR_TYPE_SEMANTIC = 6
MAJOR_TYPE_SPECIAL = 7
SUBTYPE_MASK = 0b00011111
SUBTYPE_FALSE = 20
SUBTYPE_TRUE = 21
SUBTYPE_NULL = 22
SUBTYPE_HALF_FLOAT = 25
SUBTYPE_SINGLE_FLOAT = 26
SUBTYPE_DOUBLE_FLOAT = 27
SUBTYPE_INDEFINITE = 31
SEMANTIC_TAG_FINITE_SET = 258
# Indefinite length types begin with their major type OR'd with information
# value 31.
BEGIN_INDEFINITE_BYTESTRING = struct.pack(
r'>B', MAJOR_TYPE_BYTESTRING << 5 | SUBTYPE_INDEFINITE)
BEGIN_INDEFINITE_ARRAY = struct.pack(
r'>B', MAJOR_TYPE_ARRAY << 5 | SUBTYPE_INDEFINITE)
BEGIN_INDEFINITE_MAP = struct.pack(
r'>B', MAJOR_TYPE_MAP << 5 | SUBTYPE_INDEFINITE)
ENCODED_LENGTH_1 = struct.Struct(r'>B')
ENCODED_LENGTH_2 = struct.Struct(r'>BB')
ENCODED_LENGTH_3 = struct.Struct(r'>BH')
ENCODED_LENGTH_4 = struct.Struct(r'>BL')
ENCODED_LENGTH_5 = struct.Struct(r'>BQ')
# The break ends an indefinite length item.
BREAK = b'\xff'
BREAK_INT = 255
def encodelength(majortype, length):
"""Obtain a value encoding the major type and its length."""
if length < 24:
return ENCODED_LENGTH_1.pack(majortype << 5 | length)
elif length < 256:
return ENCODED_LENGTH_2.pack(majortype << 5 | 24, length)
elif length < 65536:
return ENCODED_LENGTH_3.pack(majortype << 5 | 25, length)
elif length < 4294967296:
return ENCODED_LENGTH_4.pack(majortype << 5 | 26, length)
else:
return ENCODED_LENGTH_5.pack(majortype << 5 | 27, length)
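# A few worked examples of the encoding above (the byte values follow
# directly from the packing rules; shown here as an illustrative sketch):
#
#   encodelength(MAJOR_TYPE_UINT, 5)          -> 05        (length inline)
#   encodelength(MAJOR_TYPE_BYTESTRING, 100)  -> 58 64     (1-byte length)
#   encodelength(MAJOR_TYPE_ARRAY, 1000)      -> 99 03 e8  (2-byte length)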
def streamencodebytestring(v):
yield encodelength(MAJOR_TYPE_BYTESTRING, len(v))
yield v
def streamencodebytestringfromiter(it):
"""Convert an iterator of chunks to an indefinite bytestring.
Given an input that is iterable and each element in the iterator is
representable as bytes, emit an indefinite length bytestring.
"""
yield BEGIN_INDEFINITE_BYTESTRING
for chunk in it:
yield encodelength(MAJOR_TYPE_BYTESTRING, len(chunk))
yield chunk
yield BREAK
def streamencodeindefinitebytestring(source, chunksize=65536):
"""Given a large source buffer, emit as an indefinite length bytestring.
This is a generator of chunks constituting the encoded CBOR data.
"""
yield BEGIN_INDEFINITE_BYTESTRING
i = 0
l = len(source)
while True:
chunk = source[i:i + chunksize]
i += len(chunk)
yield encodelength(MAJOR_TYPE_BYTESTRING, len(chunk))
yield chunk
if i >= l:
break
yield BREAK
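# For example, encoding source=b'xy' with chunksize=1 emits, in order:
#
#   5f      begin indefinite length bytestring
#   41 78   1-byte chunk b'x'
#   41 79   1-byte chunk b'y'
#   ff      break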
def streamencodeint(v):
if v >= 18446744073709551616 or v < -18446744073709551616:
raise ValueError('big integers not supported')
if v >= 0:
yield encodelength(MAJOR_TYPE_UINT, v)
else:
yield encodelength(MAJOR_TYPE_NEGINT, abs(v) - 1)
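# Worked examples of the integer encoding above:
#
#   0    -> 00        (value inline in the initial byte)
#   23   -> 17        (largest inline value)
#   -1   -> 20        (negint with encoded value 0)
#   -500 -> 39 01 f3  (negint, subtype 25, encoded value 499)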
def streamencodearray(l):
"""Encode a known size iterable to an array."""
yield encodelength(MAJOR_TYPE_ARRAY, len(l))
for i in l:
for chunk in streamencode(i):
yield chunk
def streamencodearrayfromiter(it):
"""Encode an iterator of items to an indefinite length array."""
yield BEGIN_INDEFINITE_ARRAY
for i in it:
for chunk in streamencode(i):
yield chunk
yield BREAK
def _mixedtypesortkey(v):
return type(v).__name__, v
def streamencodeset(s):
# https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml defines
# semantic tag 258 for finite sets.
yield encodelength(MAJOR_TYPE_SEMANTIC, SEMANTIC_TAG_FINITE_SET)
for chunk in streamencodearray(sorted(s, key=_mixedtypesortkey)):
yield chunk
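# For example, the set {1} encodes as:
#
#   d9 01 02   semantic tag 258 (subtype 25 + 2-byte tag value)
#   81         array of 1 element
#   01         uint 1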
def streamencodemap(d):
"""Encode dictionary to a generator.
    Does not support indefinite length dictionaries.
"""
yield encodelength(MAJOR_TYPE_MAP, len(d))
    for key, value in sorted(d.items(),
                             key=lambda x: _mixedtypesortkey(x[0])):
for chunk in streamencode(key):
yield chunk
for chunk in streamencode(value):
yield chunk
def streamencodemapfromiter(it):
"""Given an iterable of (key, value), encode to an indefinite length map."""
yield BEGIN_INDEFINITE_MAP
for key, value in it:
for chunk in streamencode(key):
yield chunk
for chunk in streamencode(value):
yield chunk
yield BREAK
def streamencodebool(b):
# major type 7, simple value 20 and 21.
yield b'\xf5' if b else b'\xf4'
def streamencodenone(v):
# major type 7, simple value 22.
yield b'\xf6'
STREAM_ENCODERS = {
bytes: streamencodebytestring,
int: streamencodeint,
pycompat.long: streamencodeint,
list: streamencodearray,
tuple: streamencodearray,
dict: streamencodemap,
set: streamencodeset,
bool: streamencodebool,
type(None): streamencodenone,
}
def streamencode(v):
"""Encode a value in a streaming manner.
Given an input object, encode it to CBOR recursively.
Returns a generator of CBOR encoded bytes. There is no guarantee
that each emitted chunk fully decodes to a value or sub-value.
Encoding is deterministic - unordered collections are sorted.
"""
fn = STREAM_ENCODERS.get(v.__class__)
if not fn:
raise ValueError('do not know how to encode %s' % type(v))
return fn(v)
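# Example usage (an illustrative sketch; the byte values follow from the
# encoders above):
#
#   b''.join(streamencode({b'key': b'val'}))
#
# yields a1 (map of 1 pair), 43 6b 65 79 (bytestring b'key'),
# 43 76 61 6c (bytestring b'val').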
class CBORDecodeError(Exception):
"""Represents an error decoding CBOR."""
if sys.version_info.major >= 3:
def _elementtointeger(b, i):
return b[i]
else:
def _elementtointeger(b, i):
return ord(b[i])
STRUCT_BIG_UBYTE = struct.Struct(r'>B')
STRUCT_BIG_USHORT = struct.Struct('>H')
STRUCT_BIG_ULONG = struct.Struct('>L')
STRUCT_BIG_ULONGLONG = struct.Struct('>Q')
SPECIAL_NONE = 0
SPECIAL_START_INDEFINITE_BYTESTRING = 1
SPECIAL_START_ARRAY = 2
SPECIAL_START_MAP = 3
SPECIAL_START_SET = 4
SPECIAL_INDEFINITE_BREAK = 5
def decodeitem(b, offset=0):
"""Decode a new CBOR value from a buffer at offset.
This function attempts to decode up to one complete CBOR value
from ``b`` starting at offset ``offset``.
The beginning of a collection (such as an array, map, set, or
indefinite length bytestring) counts as a single value. For these
special cases, a state flag will indicate that a special value was seen.
When called, the function either returns a decoded value or gives
a hint as to how many more bytes are needed to do so. By calling
the function repeatedly given a stream of bytes, the caller can
build up the original values.
Returns a tuple with the following elements:
* Bool indicating whether a complete value was decoded.
    * The decoded value if the first element is True, otherwise None.
* Integer number of bytes. If positive, the number of bytes
read. If negative, the number of bytes we need to read to
decode this value or the next chunk in this value.
* One of the ``SPECIAL_*`` constants indicating special treatment
for this value. ``SPECIAL_NONE`` means this is a fully decoded
simple value (such as an integer or bool).
"""
initial = _elementtointeger(b, offset)
offset += 1
majortype = initial >> 5
subtype = initial & SUBTYPE_MASK
if majortype == MAJOR_TYPE_UINT:
complete, value, readcount = decodeuint(subtype, b, offset)
if complete:
return True, value, readcount + 1, SPECIAL_NONE
else:
return False, None, readcount, SPECIAL_NONE
elif majortype == MAJOR_TYPE_NEGINT:
# Negative integers are the same as UINT except inverted minus 1.
complete, value, readcount = decodeuint(subtype, b, offset)
if complete:
return True, -value - 1, readcount + 1, SPECIAL_NONE
else:
return False, None, readcount, SPECIAL_NONE
elif majortype == MAJOR_TYPE_BYTESTRING:
# Beginning of bytestrings are treated as uints in order to
# decode their length, which may be indefinite.
complete, size, readcount = decodeuint(subtype, b, offset,
allowindefinite=True)
        # We don't know the size of the bytestring. It must be a definite
        # length, since the indefinite subtype would have been encoded in the
        # initial byte.
if not complete:
return False, None, readcount, SPECIAL_NONE
# We know the length of the bytestring.
if size is not None:
# And the data is available in the buffer.
if offset + readcount + size <= len(b):
value = b[offset + readcount:offset + readcount + size]
return True, value, readcount + size + 1, SPECIAL_NONE
# And we need more data in order to return the bytestring.
else:
wanted = len(b) - offset - readcount - size
return False, None, wanted, SPECIAL_NONE
# It is an indefinite length bytestring.
else:
return True, None, 1, SPECIAL_START_INDEFINITE_BYTESTRING
elif majortype == MAJOR_TYPE_STRING:
raise CBORDecodeError('string major type not supported')
elif majortype == MAJOR_TYPE_ARRAY:
# Beginning of arrays are treated as uints in order to decode their
# length. We don't allow indefinite length arrays.
complete, size, readcount = decodeuint(subtype, b, offset)
if complete:
return True, size, readcount + 1, SPECIAL_START_ARRAY
else:
return False, None, readcount, SPECIAL_NONE
elif majortype == MAJOR_TYPE_MAP:
# Beginning of maps are treated as uints in order to decode their
        # number of elements. We don't allow indefinite length maps.
complete, size, readcount = decodeuint(subtype, b, offset)
if complete:
return True, size, readcount + 1, SPECIAL_START_MAP
else:
return False, None, readcount, SPECIAL_NONE
elif majortype == MAJOR_TYPE_SEMANTIC:
# Semantic tag value is read the same as a uint.
complete, tagvalue, readcount = decodeuint(subtype, b, offset)
if not complete:
return False, None, readcount, SPECIAL_NONE
# This behavior here is a little wonky. The main type being "decorated"
# by this semantic tag follows. A more robust parser would probably emit
# a special flag indicating this as a semantic tag and let the caller
# deal with the types that follow. But since we don't support many
# semantic tags, it is easier to deal with the special cases here and
# hide complexity from the caller. If we add support for more semantic
# tags, we should probably move semantic tag handling into the caller.
if tagvalue == SEMANTIC_TAG_FINITE_SET:
if offset + readcount >= len(b):
return False, None, -1, SPECIAL_NONE
complete, size, readcount2, special = decodeitem(b,
offset + readcount)
if not complete:
return False, None, readcount2, SPECIAL_NONE
if special != SPECIAL_START_ARRAY:
raise CBORDecodeError('expected array after finite set '
'semantic tag')
return True, size, readcount + readcount2 + 1, SPECIAL_START_SET
else:
raise CBORDecodeError('semantic tag %d not allowed' % tagvalue)
elif majortype == MAJOR_TYPE_SPECIAL:
# Only specific values for the information field are allowed.
if subtype == SUBTYPE_FALSE:
return True, False, 1, SPECIAL_NONE
elif subtype == SUBTYPE_TRUE:
return True, True, 1, SPECIAL_NONE
elif subtype == SUBTYPE_NULL:
return True, None, 1, SPECIAL_NONE
elif subtype == SUBTYPE_INDEFINITE:
return True, None, 1, SPECIAL_INDEFINITE_BREAK
        # If subtype is 24, the simple value is in the next byte (not
        # supported here).
else:
raise CBORDecodeError('special type %d not allowed' % subtype)
else:
assert False
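# A few illustrative calls (the return values follow from the rules above):
#
#   decodeitem(b'\x18\x64') -> (True, 100, 2, SPECIAL_NONE)
#       uint with its value in the following byte
#   decodeitem(b'\x82')     -> (True, 2, 1, SPECIAL_START_ARRAY)
#       start of a 2-element array; elements follow as separate items
#   decodeitem(b'\x19')     -> (False, None, -2, SPECIAL_NONE)
#       uint with a 2-byte length; 2 more bytes are needed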
def decodeuint(subtype, b, offset=0, allowindefinite=False):
"""Decode an unsigned integer.
``subtype`` is the lower 5 bits from the initial byte CBOR item
"header." ``b`` is a buffer containing bytes. ``offset`` points to
the index of the first byte after the byte that ``subtype`` was
derived from.
``allowindefinite`` allows the special indefinite length value
indicator.
Returns a 3-tuple of (successful, value, count).
The first element is a bool indicating if decoding completed. The 2nd
is the decoded integer value or None if not fully decoded or the subtype
is 31 and ``allowindefinite`` is True. The 3rd value is the count of bytes.
If positive, it is the number of additional bytes decoded. If negative,
it is the number of additional bytes needed to decode this value.
"""
# Small values are inline.
if subtype < 24:
return True, subtype, 0
# Indefinite length specifier.
elif subtype == 31:
if allowindefinite:
return True, None, 0
else:
raise CBORDecodeError('indefinite length uint not allowed here')
elif subtype >= 28:
raise CBORDecodeError('unsupported subtype on integer type: %d' %
subtype)
if subtype == 24:
s = STRUCT_BIG_UBYTE
elif subtype == 25:
s = STRUCT_BIG_USHORT
elif subtype == 26:
s = STRUCT_BIG_ULONG
elif subtype == 27:
s = STRUCT_BIG_ULONGLONG
else:
raise CBORDecodeError('bounds condition checking violation')
if len(b) - offset >= s.size:
return True, s.unpack_from(b, offset)[0], s.size
else:
return False, None, len(b) - offset - s.size
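# For example:
#
#   decodeuint(5, b'')          -> (True, 5, 0)       value inline in subtype
#   decodeuint(24, b'\x64')     -> (True, 100, 1)     one extra byte read
#   decodeuint(26, b'\x00\x01') -> (False, None, -2)  4 bytes needed, 2 present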
class bytestringchunk(bytes):
"""Represents a chunk/segment in an indefinite length bytestring.
This behaves like a ``bytes`` but in addition has the ``isfirst``
and ``islast`` attributes indicating whether this chunk is the first
or last in an indefinite length bytestring.
"""
def __new__(cls, v, first=False, last=False):
self = bytes.__new__(cls, v)
self.isfirst = first
self.islast = last
return self
class sansiodecoder(object):
"""A CBOR decoder that doesn't perform its own I/O.
To use, construct an instance and feed it segments containing
CBOR-encoded bytes via ``decode()``. The return value from ``decode()``
indicates whether a fully-decoded value is available, how many bytes
were consumed, and offers a hint as to how many bytes should be fed
in next time to decode the next value.
The decoder assumes it will decode N discrete CBOR values, not just
a single value. i.e. if the bytestream contains uints packed one after
the other, the decoder will decode them all, rather than just the initial
one.
When ``decode()`` indicates a value is available, call ``getavailable()``
to return all fully decoded values.
``decode()`` can partially decode input. It is up to the caller to keep
track of what data was consumed and to pass unconsumed data in on the
next invocation.
The decoder decodes atomically at the *item* level. See ``decodeitem()``.
If an *item* cannot be fully decoded, the decoder won't record it as
partially consumed. Instead, the caller will be instructed to pass in
the initial bytes of this item on the next invocation. This does result
in some redundant parsing. But the overhead should be minimal.
This decoder only supports a subset of CBOR as required by Mercurial.
It lacks support for:
* Indefinite length arrays
* Indefinite length maps
* Use of indefinite length bytestrings as keys or values within
arrays, maps, or sets.
* Nested arrays, maps, or sets within sets
* Any semantic tag that isn't a mathematical finite set
* Floating point numbers
* Undefined special value
CBOR types are decoded to Python types as follows:
uint -> int
negint -> int
bytestring -> bytes
map -> dict
array -> list
True -> bool
False -> bool
null -> None
indefinite length bytestring chunk -> [bytestringchunk]
The only non-obvious mapping here is an indefinite length bytestring
to the ``bytestringchunk`` type. This is to facilitate streaming
indefinite length bytestrings out of the decoder and to differentiate
a regular bytestring from an indefinite length bytestring.
"""
_STATE_NONE = 0
_STATE_WANT_MAP_KEY = 1
_STATE_WANT_MAP_VALUE = 2
_STATE_WANT_ARRAY_VALUE = 3
_STATE_WANT_SET_VALUE = 4
_STATE_WANT_BYTESTRING_CHUNK_FIRST = 5
_STATE_WANT_BYTESTRING_CHUNK_SUBSEQUENT = 6
def __init__(self):
# TODO add support for limiting size of bytestrings
# TODO add support for limiting number of keys / values in collections
# TODO add support for limiting size of buffered partial values
self.decodedbytecount = 0
self._state = self._STATE_NONE
# Stack of active nested collections. Each entry is a dict describing
# the collection.
self._collectionstack = []
# Fully decoded key to use for the current map.
self._currentmapkey = None
# Fully decoded values available for retrieval.
self._decodedvalues = []
@property
def inprogress(self):
"""Whether the decoder has partially decoded a value."""
return self._state != self._STATE_NONE
def decode(self, b, offset=0):
"""Attempt to decode bytes from an input buffer.
``b`` is a collection of bytes and ``offset`` is the byte
offset within that buffer from which to begin reading data.
        ``b`` must support ``len()`` and slicing via ``__getitem__``.
        Typically ``bytes`` instances are used.
Returns a tuple with the following fields:
* Bool indicating whether values are available for retrieval.
* Integer indicating the number of bytes that were fully consumed,
starting from ``offset``.
* Integer indicating the number of bytes that are desired for the
next call in order to decode an item.
"""
if not b:
return bool(self._decodedvalues), 0, 0
initialoffset = offset
# We could easily split the body of this loop into a function. But
# Python performance is sensitive to function calls and collections
# are composed of many items. So leaving as a while loop could help
# with performance. One thing that may not help is the use of
# if..elif versus a lookup/dispatch table. There may be value
# in switching that.
while offset < len(b):
# Attempt to decode an item. This could be a whole value or a
# special value indicating an event, such as start or end of a
# collection or indefinite length type.
complete, value, readcount, special = decodeitem(b, offset)
if readcount > 0:
self.decodedbytecount += readcount
if not complete:
assert readcount < 0
return (
bool(self._decodedvalues),
offset - initialoffset,
-readcount,
)
offset += readcount
# No nested state. We either have a full value or beginning of a
# complex value to deal with.
if self._state == self._STATE_NONE:
# A normal value.
if special == SPECIAL_NONE:
self._decodedvalues.append(value)
elif special == SPECIAL_START_ARRAY:
self._collectionstack.append({
'remaining': value,
'v': [],
})
self._state = self._STATE_WANT_ARRAY_VALUE
elif special == SPECIAL_START_MAP:
self._collectionstack.append({
'remaining': value,
'v': {},
})
self._state = self._STATE_WANT_MAP_KEY
elif special == SPECIAL_START_SET:
self._collectionstack.append({
'remaining': value,
'v': set(),
})
self._state = self._STATE_WANT_SET_VALUE
elif special == SPECIAL_START_INDEFINITE_BYTESTRING:
self._state = self._STATE_WANT_BYTESTRING_CHUNK_FIRST
else:
raise CBORDecodeError('unhandled special state: %d' %
special)
# This value becomes an element of the current array.
elif self._state == self._STATE_WANT_ARRAY_VALUE:
# Simple values get appended.
if special == SPECIAL_NONE:
c = self._collectionstack[-1]
c['v'].append(value)
c['remaining'] -= 1
                    # self._state doesn't need to be changed.
# An array nested within an array.
elif special == SPECIAL_START_ARRAY:
lastc = self._collectionstack[-1]
newvalue = []
lastc['v'].append(newvalue)
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue,
})
                    # self._state doesn't need to be changed.
# A map nested within an array.
elif special == SPECIAL_START_MAP:
lastc = self._collectionstack[-1]
newvalue = {}
lastc['v'].append(newvalue)
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue
})
self._state = self._STATE_WANT_MAP_KEY
elif special == SPECIAL_START_SET:
lastc = self._collectionstack[-1]
newvalue = set()
lastc['v'].append(newvalue)
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue,
})
self._state = self._STATE_WANT_SET_VALUE
elif special == SPECIAL_START_INDEFINITE_BYTESTRING:
raise CBORDecodeError('indefinite length bytestrings '
'not allowed as array values')
else:
raise CBORDecodeError('unhandled special item when '
'expecting array value: %d' % special)
# This value becomes the key of the current map instance.
elif self._state == self._STATE_WANT_MAP_KEY:
if special == SPECIAL_NONE:
self._currentmapkey = value
self._state = self._STATE_WANT_MAP_VALUE
elif special == SPECIAL_START_INDEFINITE_BYTESTRING:
raise CBORDecodeError('indefinite length bytestrings '
'not allowed as map keys')
elif special in (SPECIAL_START_ARRAY, SPECIAL_START_MAP,
SPECIAL_START_SET):
raise CBORDecodeError('collections not supported as map '
'keys')
# We do not allow special values to be used as map keys.
else:
raise CBORDecodeError('unhandled special item when '
'expecting map key: %d' % special)
# This value becomes the value of the current map key.
elif self._state == self._STATE_WANT_MAP_VALUE:
# Simple values simply get inserted into the map.
if special == SPECIAL_NONE:
lastc = self._collectionstack[-1]
lastc['v'][self._currentmapkey] = value
lastc['remaining'] -= 1
self._state = self._STATE_WANT_MAP_KEY
# A new array is used as the map value.
elif special == SPECIAL_START_ARRAY:
lastc = self._collectionstack[-1]
newvalue = []
lastc['v'][self._currentmapkey] = newvalue
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue,
})
self._state = self._STATE_WANT_ARRAY_VALUE
# A new map is used as the map value.
elif special == SPECIAL_START_MAP:
lastc = self._collectionstack[-1]
newvalue = {}
lastc['v'][self._currentmapkey] = newvalue
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue,
})
self._state = self._STATE_WANT_MAP_KEY
# A new set is used as the map value.
elif special == SPECIAL_START_SET:
lastc = self._collectionstack[-1]
newvalue = set()
lastc['v'][self._currentmapkey] = newvalue
lastc['remaining'] -= 1
self._collectionstack.append({
'remaining': value,
'v': newvalue,
})
self._state = self._STATE_WANT_SET_VALUE
elif special == SPECIAL_START_INDEFINITE_BYTESTRING:
raise CBORDecodeError('indefinite length bytestrings not '
'allowed as map values')
else:
raise CBORDecodeError('unhandled special item when '
'expecting map value: %d' % special)
self._currentmapkey = None
# This value is added to the current set.
elif self._state == self._STATE_WANT_SET_VALUE:
if special == SPECIAL_NONE:
lastc = self._collectionstack[-1]
lastc['v'].add(value)
lastc['remaining'] -= 1
elif special == SPECIAL_START_INDEFINITE_BYTESTRING:
raise CBORDecodeError('indefinite length bytestrings not '
'allowed as set values')
elif special in (SPECIAL_START_ARRAY,
SPECIAL_START_MAP,
SPECIAL_START_SET):
raise CBORDecodeError('collections not allowed as set '
'values')
# We don't allow non-trivial types to exist as set values.
else:
raise CBORDecodeError('unhandled special item when '
'expecting set value: %d' % special)
# This value represents the first chunk in an indefinite length
# bytestring.
elif self._state == self._STATE_WANT_BYTESTRING_CHUNK_FIRST:
# We received a full chunk.
if special == SPECIAL_NONE:
self._decodedvalues.append(bytestringchunk(value,
first=True))
self._state = self._STATE_WANT_BYTESTRING_CHUNK_SUBSEQUENT
# The end of stream marker. This means it is an empty
# indefinite length bytestring.
elif special == SPECIAL_INDEFINITE_BREAK:
# We /could/ convert this to a b''. But we want to preserve
# the nature of the underlying data so consumers expecting
# an indefinite length bytestring get one.
self._decodedvalues.append(bytestringchunk(b'',
first=True,
last=True))
# Since indefinite length bytestrings can't be used in
# collections, we must be at the root level.
assert not self._collectionstack
self._state = self._STATE_NONE
else:
raise CBORDecodeError('unexpected special value when '
'expecting bytestring chunk: %d' %
special)
# This value represents the non-initial chunk in an indefinite
# length bytestring.
elif self._state == self._STATE_WANT_BYTESTRING_CHUNK_SUBSEQUENT:
# We received a full chunk.
if special == SPECIAL_NONE:
self._decodedvalues.append(bytestringchunk(value))
# The end of stream marker.
elif special == SPECIAL_INDEFINITE_BREAK:
self._decodedvalues.append(bytestringchunk(b'', last=True))
# Since indefinite length bytestrings can't be used in
# collections, we must be at the root level.
assert not self._collectionstack
self._state = self._STATE_NONE
else:
raise CBORDecodeError('unexpected special value when '
'expecting bytestring chunk: %d' %
special)
else:
raise CBORDecodeError('unhandled decoder state: %d' %
self._state)
# We could have just added the final value in a collection. End
# all complete collections at the top of the stack.
while True:
# Bail if we're not waiting on a new collection item.
if self._state not in (self._STATE_WANT_ARRAY_VALUE,
self._STATE_WANT_MAP_KEY,
self._STATE_WANT_SET_VALUE):
break
# Or we are expecting more items for this collection.
lastc = self._collectionstack[-1]
if lastc['remaining']:
break
# The collection at the top of the stack is complete.
# Discard it, as it isn't needed for future items.
self._collectionstack.pop()
# If this is a nested collection, we don't emit it, since it
# will be emitted by its parent collection. But we do need to
# update state to reflect what the new top-most collection
# on the stack is.
if self._collectionstack:
self._state = {
list: self._STATE_WANT_ARRAY_VALUE,
dict: self._STATE_WANT_MAP_KEY,
set: self._STATE_WANT_SET_VALUE,
}[type(self._collectionstack[-1]['v'])]
# If this is the root collection, emit it.
else:
self._decodedvalues.append(lastc['v'])
self._state = self._STATE_NONE
return (
bool(self._decodedvalues),
offset - initialoffset,
0,
)
def getavailable(self):
"""Returns an iterator over fully decoded values.
Once values are retrieved, they won't be available on the next call.
"""
l = list(self._decodedvalues)
self._decodedvalues = []
return l
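# Example of driving the decoder with partial input (a sketch; per the
# decode() contract, unconsumed bytes must be re-sent on the next call):
#
#   decoder = sansiodecoder()
#   decoder.decode(b'\x18')      -> (False, 0, 1)   incomplete; want 1 byte
#   decoder.decode(b'\x18\x64')  -> (True, 2, 0)    full buffer re-sent
#   decoder.getavailable()       -> [100]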
class bufferingdecoder(object):
"""A CBOR decoder that buffers undecoded input.
This is a glorified wrapper around ``sansiodecoder`` that adds a buffering
layer. All input that isn't consumed by ``sansiodecoder`` will be buffered
and concatenated with any new input that arrives later.
TODO consider adding limits as to the maximum amount of data that can
be buffered.
"""
def __init__(self):
self._decoder = sansiodecoder()
self._leftover = None
def decode(self, b):
"""Attempt to decode bytes to CBOR values.
Returns a tuple with the following fields:
* Bool indicating whether new values are available for retrieval.
* Integer number of bytes decoded from the new input.
* Integer number of bytes wanted to decode the next value.
"""
        if self._leftover:
            oldlen = len(self._leftover)
            b = self._leftover + b
            self._leftover = None
        else:
            oldlen = 0
available, readcount, wanted = self._decoder.decode(b)
if readcount < len(b):
self._leftover = b[readcount:]
return available, readcount - oldlen, wanted
def getavailable(self):
return self._decoder.getavailable()
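# Example showing the buffering behavior (a sketch):
#
#   decoder = bufferingdecoder()
#   decoder.decode(b'\x18')  -> (False, 0, 1)  byte buffered internally
#   decoder.decode(b'\x64')  -> (True, 1, 0)   decoded with the leftover
#   decoder.getavailable()   -> [100]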
def decodeall(b):
"""Decode all CBOR items present in an iterable of bytes.
In addition to regular decode errors, raises CBORDecodeError if the
entirety of the passed buffer does not fully decode to complete CBOR
values. This includes failure to decode any value, incomplete collection
types, incomplete indefinite length items, and extra data at the end of
the buffer.
"""
if not b:
return []
decoder = sansiodecoder()
havevalues, readcount, wantbytes = decoder.decode(b)
if readcount != len(b):
raise CBORDecodeError('input data not fully consumed')
if decoder.inprogress:
raise CBORDecodeError('input data not complete')
return decoder.getavailable()
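# For example (a sketch):
#
#   decodeall(b'\x01\x81\x02')  -> [1, [2]]
#   decodeall(b'\x18')          -> raises CBORDecodeError (incomplete input)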