bundle2.py
2690 lines
| 90.2 KiB
| text/x-python
|
PythonLexer
/ mercurial / bundle2.py
Pierre-Yves David
|
r20801 | # bundle2.py - generic container format to transmit arbitrary data. | ||
# | ||||
# Copyright 2013 Facebook, Inc. | ||||
# | ||||
# This software may be used and distributed according to the terms of the | ||||
# GNU General Public License version 2 or any later version. | ||||
"""Handling of the new bundle2 format | ||||
The goal of bundle2 is to act as an atomically packet to transmit a set of | ||||
payloads in an application agnostic way. It consist in a sequence of "parts" | ||||
that will be handed to and processed by the application layer. | ||||
General format architecture | ||||
=========================== | ||||
The format is architectured as follow | ||||
- magic string | ||||
- stream level parameters | ||||
- payload parts (any number) | ||||
- end of stream marker. | ||||
Pierre-Yves David
|
r20856 | the Binary format | ||
Pierre-Yves David
|
r20801 | ============================ | ||
Mads Kiilerich
|
r21024 | All numbers are unsigned and big-endian. | ||
Pierre-Yves David
|
r20801 | |||
stream level parameters | ||||
------------------------ | ||||
Binary format is as follow | ||||
Pierre-Yves David
|
r23009 | :params size: int32 | ||
Pierre-Yves David
|
r20801 | |||
The total number of Bytes used by the parameters | ||||
:params value: arbitrary number of Bytes | ||||
A blob of `params size` containing the serialized version of all stream level | ||||
parameters. | ||||
Mads Kiilerich
|
r21024 | The blob contains a space separated list of parameters. Parameters with value | ||
Pierre-Yves David
|
r20811 | are stored in the form `<name>=<value>`. Both name and value are urlquoted. | ||
Pierre-Yves David
|
r20804 | |||
Pierre-Yves David
|
r20813 | Empty name are obviously forbidden. | ||
Pierre-Yves David
|
r20844 | Name MUST start with a letter. If this first letter is lower case, the | ||
Mads Kiilerich
|
r21024 | parameter is advisory and can be safely ignored. However when the first | ||
Pierre-Yves David
|
r20844 | letter is capital, the parameter is mandatory and the bundling process MUST | ||
stop if he is not able to proceed it. | ||||
Pierre-Yves David
|
r20814 | |||
Pierre-Yves David
|
r20808 | Stream parameters use a simple textual format for two main reasons: | ||
Pierre-Yves David
|
r20804 | |||
Mads Kiilerich
|
r21024 | - Stream level parameters should remain simple and we want to discourage any | ||
Pierre-Yves David
|
r20808 | crazy usage. | ||
Mads Kiilerich
|
r21024 | - Textual data allow easy human inspection of a bundle2 header in case of | ||
Pierre-Yves David
|
r20808 | troubles. | ||
Any Applicative level options MUST go into a bundle2 part instead. | ||||
Pierre-Yves David
|
r20801 | |||
Payload part | ||||
------------------------ | ||||
Binary format is as follow | ||||
Pierre-Yves David
|
r23009 | :header size: int32 | ||
Pierre-Yves David
|
r20801 | |||
Martin von Zweigbergk
|
r25507 | The total number of Bytes used by the part header. When the header is empty | ||
Pierre-Yves David
|
r20801 | (size = 0) this is interpreted as the end of stream marker. | ||
Pierre-Yves David
|
r20856 | :header: | ||
The header defines how to interpret the part. It contains two piece of | ||||
data: the part type, and the part parameters. | ||||
The part type is used to route an application level handler, that can | ||||
interpret payload. | ||||
Part parameters are passed to the application level handler. They are | ||||
meant to convey information that will help the application level object to | ||||
interpret the part payload. | ||||
The binary format of the header is has follow | ||||
:typesize: (one byte) | ||||
Pierre-Yves David
|
r20877 | |||
Matt Mackall
|
r23916 | :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*) | ||
Pierre-Yves David
|
r20877 | |||
Pierre-Yves David
|
r20995 | :partid: A 32bits integer (unique in the bundle) that can be used to refer | ||
to this part. | ||||
Pierre-Yves David
|
r20877 | :parameters: | ||
Mads Kiilerich
|
r21024 | Part's parameter may have arbitrary content, the binary structure is:: | ||
Pierre-Yves David
|
r20877 | |||
<mandatory-count><advisory-count><param-sizes><param-data> | ||||
:mandatory-count: 1 byte, number of mandatory parameters | ||||
:advisory-count: 1 byte, number of advisory parameters | ||||
:param-sizes: | ||||
N couple of bytes, where N is the total number of parameters. Each | ||||
couple contains (<size-of-key>, <size-of-value) for one parameter. | ||||
:param-data: | ||||
A blob of bytes from which each parameter key and value can be | ||||
retrieved using the list of size couples stored in the previous | ||||
field. | ||||
Mandatory parameters comes first, then the advisory ones. | ||||
Pierre-Yves David
|
r20856 | |||
Pierre-Yves David
|
r21607 | Each parameter's key MUST be unique within the part. | ||
Pierre-Yves David
|
r20856 | :payload: | ||
Pierre-Yves David
|
r20876 | payload is a series of `<chunksize><chunkdata>`. | ||
Pierre-Yves David
|
r23009 | `chunksize` is an int32, `chunkdata` are plain bytes (as much as | ||
Pierre-Yves David
|
r20876 | `chunksize` says)` The payload part is concluded by a zero size chunk. | ||
The current implementation always produces either zero or one chunk. | ||||
Mads Kiilerich
|
r21024 | This is an implementation limitation that will ultimately be lifted. | ||
Pierre-Yves David
|
r20891 | |||
Pierre-Yves David
|
r23009 | `chunksize` can be negative to trigger special case processing. No such | ||
processing is in place yet. | ||||
Pierre-Yves David
|
r20891 | Bundle processing | ||
============================ | ||||
Each part is processed in order using a "part handler". Handler are registered | ||||
for a certain part type. | ||||
The matching of a part to its handler is case insensitive. The case of the | ||||
part type is used to know if a part is mandatory or advisory. If the Part type | ||||
contains any uppercase char it is considered mandatory. When no handler is | ||||
known for a Mandatory part, the process is aborted and an exception is raised. | ||||
Pierre-Yves David
|
r20892 | If the part is advisory and no handler is known, the part is ignored. When the | ||
process is aborted, the full bundle is still read from the stream to keep the | ||||
channel usable. But none of the part read from an abort are processed. In the | ||||
future, dropping the stream may become an option for channel we do not care to | ||||
preserve. | ||||
Pierre-Yves David
|
r20801 | """ | ||
Matt Harbison
|
r52756 | from __future__ import annotations | ||
Gregory Szorc
|
r25919 | |||
Boris Feld
|
r36982 | import collections | ||
Eric Sumner
|
r24026 | import errno | ||
Gregory Szorc
|
r35037 | import os | ||
Gregory Szorc
|
r25919 | import re | ||
Pierre-Yves David
|
r20814 | import string | ||
Gregory Szorc
|
r25919 | import struct | ||
import sys | ||||
Matt Harbison
|
r52564 | import typing | ||
Pierre-Yves David
|
r20804 | |||
Gregory Szorc
|
r25919 | from .i18n import _ | ||
Joerg Sonnenberger
|
r46729 | from .node import ( | ||
hex, | ||||
short, | ||||
) | ||||
Gregory Szorc
|
r25919 | from . import ( | ||
Boris Feld
|
r35259 | bookmarks, | ||
Gregory Szorc
|
r25919 | changegroup, | ||
Boris Feld
|
r36981 | encoding, | ||
Gregory Szorc
|
r25919 | error, | ||
obsolete, | ||||
Martin von Zweigbergk
|
r33031 | phases, | ||
Gregory Szorc
|
r25919 | pushkey, | ||
Yuya Nishihara
|
r30030 | pycompat, | ||
Pulkit Goyal
|
r45932 | requirements, | ||
Pulkit Goyal
|
r45666 | scmutil, | ||
Boris Feld
|
r35776 | streamclone, | ||
Gregory Szorc
|
r25919 | tags, | ||
url, | ||||
util, | ||||
) | ||||
r47669 | from .utils import ( | |||
stringutil, | ||||
urlutil, | ||||
) | ||||
r48000 | from .interfaces import repository | |||
Pierre-Yves David
|
r20802 | |||
Matt Harbison
|
r52564 | if typing.TYPE_CHECKING: | ||
from typing import ( | ||||
Dict, | ||||
List, | ||||
Optional, | ||||
Tuple, | ||||
Union, | ||||
) | ||||
Capabilities = Dict[bytes, Union[List[bytes], Tuple[bytes, ...]]] | ||||
timeless
|
r28883 | urlerr = util.urlerr | ||
urlreq = util.urlreq | ||||
Pierre-Yves David
|
r20804 | _pack = struct.pack | ||
_unpack = struct.unpack | ||||
Augie Fackler
|
r43347 | _fstreamparamsize = b'>i' | ||
_fpartheadersize = b'>i' | ||||
_fparttypesize = b'>B' | ||||
_fpartid = b'>I' | ||||
_fpayloadsize = b'>i' | ||||
_fpartparamcount = b'>BB' | ||||
Pierre-Yves David
|
r20877 | |||
Gregory Szorc
|
r35811 | preferedchunksize = 32768 | ||
Pierre-Yves David
|
r21001 | |||
Augie Fackler
|
r43347 | _parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]') | ||
Pierre-Yves David
|
r23868 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r25313 | def outdebug(ui, message): | ||
"""debug regarding output stream (bundling)""" | ||||
Augie Fackler
|
r43347 | if ui.configbool(b'devel', b'bundle2.debug'): | ||
ui.debug(b'bundle2-output: %s\n' % message) | ||||
Pierre-Yves David
|
r25313 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r25318 | def indebug(ui, message): | ||
"""debug on input stream (unbundling)""" | ||||
Augie Fackler
|
r43347 | if ui.configbool(b'devel', b'bundle2.debug'): | ||
ui.debug(b'bundle2-input: %s\n' % message) | ||||
Pierre-Yves David
|
r25318 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r23868 | def validateparttype(parttype): | ||
"""raise ValueError if a parttype contains invalid character""" | ||||
Matt Mackall
|
r23916 | if _parttypeforbidden.search(parttype): | ||
Pierre-Yves David
|
r23868 | raise ValueError(parttype) | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20877 | def _makefpartparamsizes(nbparams): | ||
"""return a struct format to read part parameter sizes | ||||
The number parameters is variable so we need to build that format | ||||
dynamically. | ||||
""" | ||||
Augie Fackler
|
r43347 | return b'>' + (b'BB' * nbparams) | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20804 | |||
Pierre-Yves David
|
r20890 | parthandlermapping = {} | ||
Pierre-Yves David
|
r20889 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r21623 | def parthandler(parttype, params=()): | ||
Pierre-Yves David
|
r20890 | """decorator that register a function as a bundle2 part handler | ||
eg:: | ||||
Pierre-Yves David
|
r21624 | @parthandler('myparttype', ('mandatory', 'param', 'handled')) | ||
Pierre-Yves David
|
r20890 | def myparttypehandler(...): | ||
'''process a part of type "my part".''' | ||||
... | ||||
""" | ||||
Pierre-Yves David
|
r23868 | validateparttype(parttype) | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20890 | def _decorator(func): | ||
Augie Fackler
|
r43346 | lparttype = parttype.lower() # enforce lower case matching. | ||
Pierre-Yves David
|
r20891 | assert lparttype not in parthandlermapping | ||
parthandlermapping[lparttype] = func | ||||
Pierre-Yves David
|
r21623 | func.params = frozenset(params) | ||
Pierre-Yves David
|
r20890 | return func | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20890 | return _decorator | ||
Pierre-Yves David
|
r20889 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r49801 | class unbundlerecords: | ||
Pierre-Yves David
|
r20949 | """keep record of what happens during and unbundle | ||
New records are added using `records.add('cat', obj)`. Where 'cat' is a | ||||
Mads Kiilerich
|
r21024 | category of record and obj is an arbitrary object. | ||
Pierre-Yves David
|
r20949 | |||
`records['cat']` will return all entries of this category 'cat'. | ||||
Iterating on the object itself will yield `('category', obj)` tuples | ||||
for all entries. | ||||
All iterations happens in chronological order. | ||||
""" | ||||
def __init__(self): | ||||
self._categories = {} | ||||
self._sequences = [] | ||||
Pierre-Yves David
|
r20996 | self._replies = {} | ||
Pierre-Yves David
|
r20949 | |||
Pierre-Yves David
|
r20996 | def add(self, category, entry, inreplyto=None): | ||
Pierre-Yves David
|
r20949 | """add a new record of a given category. | ||
The entry can then be retrieved in the list returned by | ||||
self['category'].""" | ||||
self._categories.setdefault(category, []).append(entry) | ||||
self._sequences.append((category, entry)) | ||||
Pierre-Yves David
|
r20996 | if inreplyto is not None: | ||
self.getreplies(inreplyto).add(category, entry) | ||||
def getreplies(self, partid): | ||||
Mads Kiilerich
|
r23139 | """get the records that are replies to a specific part""" | ||
Pierre-Yves David
|
r20996 | return self._replies.setdefault(partid, unbundlerecords()) | ||
Pierre-Yves David
|
r20949 | |||
def __getitem__(self, cat): | ||||
return tuple(self._categories.get(cat, ())) | ||||
def __iter__(self): | ||||
return iter(self._sequences) | ||||
def __len__(self): | ||||
return len(self._sequences) | ||||
def __nonzero__(self): | ||||
return bool(self._sequences) | ||||
Gregory Szorc
|
r31476 | __bool__ = __nonzero__ | ||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r49801 | class bundleoperation: | ||
Pierre-Yves David
|
r20948 | """an object that represents a single bundling process | ||
Its purpose is to carry unbundle-related objects and states. | ||||
A new object should be created at the beginning of each bundle processing. | ||||
The object is to be returned by the processing function. | ||||
The object has very little content now it will ultimately contain: | ||||
* an access to the repo the bundle is applied to, | ||||
* a ui object, | ||||
* a way to retrieve a transaction to add changes to the repo, | ||||
* a way to record the result of processing each part, | ||||
* a way to construct a bundle response when applicable. | ||||
""" | ||||
r50659 | def __init__( | |||
self, | ||||
repo, | ||||
transactiongetter, | ||||
captureoutput=True, | ||||
source=b'', | ||||
remote=None, | ||||
): | ||||
Pierre-Yves David
|
r20948 | self.repo = repo | ||
r50659 | # the peer object who produced this bundle if available | |||
self.remote = remote | ||||
Pierre-Yves David
|
r20948 | self.ui = repo.ui | ||
Pierre-Yves David
|
r20949 | self.records = unbundlerecords() | ||
Pierre-Yves David
|
r20997 | self.reply = None | ||
Pierre-Yves David
|
r24878 | self.captureoutput = captureoutput | ||
Pulkit Goyal
|
r33629 | self.hookargs = {} | ||
Pulkit Goyal
|
r33630 | self._gettransaction = transactiongetter | ||
Boris Feld
|
r35266 | # carries value that can modify part behavior | ||
self.modes = {} | ||||
Pulkit Goyal
|
r37252 | self.source = source | ||
Pulkit Goyal
|
r33630 | |||
def gettransaction(self): | ||||
transaction = self._gettransaction() | ||||
Yuya Nishihara
|
r33809 | if self.hookargs: | ||
Pulkit Goyal
|
r33630 | # the ones added to the transaction supercede those added | ||
# to the operation. | ||||
self.hookargs.update(transaction.hookargs) | ||||
transaction.hookargs = self.hookargs | ||||
Yuya Nishihara
|
r33809 | # mark the hookargs as flushed. further attempts to add to | ||
# hookargs will result in an abort. | ||||
self.hookargs = None | ||||
Pulkit Goyal
|
r33630 | |||
return transaction | ||||
Pulkit Goyal
|
r33629 | |||
def addhookargs(self, hookargs): | ||||
Pulkit Goyal
|
r33630 | if self.hookargs is None: | ||
Augie Fackler
|
r43346 | raise error.ProgrammingError( | ||
Augie Fackler
|
r43347 | b'attempted to add hookargs to ' | ||
b'operation after transaction started' | ||||
Augie Fackler
|
r43346 | ) | ||
Pulkit Goyal
|
r33629 | self.hookargs.update(hookargs) | ||
Pierre-Yves David
|
r20948 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20952 | class TransactionUnavailable(RuntimeError): | ||
pass | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r20952 | def _notransaction(): | ||
"""default method to get a transaction while processing a bundle | ||||
Raise an exception to highlight the fact that no transaction was expected | ||||
to be created""" | ||||
raise TransactionUnavailable() | ||||
Augie Fackler
|
r43346 | |||
r50659 | def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs): | |||
Pierre-Yves David
|
r26790 | # transform me into unbundler.apply() as soon as the freeze is lifted | ||
Martin von Zweigbergk
|
r33043 | if isinstance(unbundler, unbundle20): | ||
Augie Fackler
|
r43347 | tr.hookargs[b'bundle2'] = b'1' | ||
if source is not None and b'source' not in tr.hookargs: | ||||
tr.hookargs[b'source'] = source | ||||
if url is not None and b'url' not in tr.hookargs: | ||||
tr.hookargs[b'url'] = url | ||||
r50659 | return processbundle( | |||
repo, unbundler, lambda: tr, source=source, remote=remote | ||||
) | ||||
Martin von Zweigbergk
|
r33043 | else: | ||
Martin von Zweigbergk
|
r33044 | # the transactiongetter won't be used, but we might as well set it | ||
r50659 | op = bundleoperation(repo, lambda: tr, source=source, remote=remote) | |||
Martin von Zweigbergk
|
r33044 | _processchangegroup(op, unbundler, tr, source, url, **kwargs) | ||
return op | ||||
Pierre-Yves David
|
r26790 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r49801 | class partiterator: | ||
Durham Goode
|
r34151 | def __init__(self, repo, op, unbundler): | ||
Durham Goode
|
r34150 | self.repo = repo | ||
Durham Goode
|
r34151 | self.op = op | ||
Durham Goode
|
r34149 | self.unbundler = unbundler | ||
Durham Goode
|
r34150 | self.iterator = None | ||
self.count = 0 | ||||
Durham Goode
|
r34259 | self.current = None | ||
Durham Goode
|
r34149 | |||
def __enter__(self): | ||||
Durham Goode
|
r34150 | def func(): | ||
Martin von Zweigbergk
|
r43201 | itr = enumerate(self.unbundler.iterparts(), 1) | ||
Durham Goode
|
r34150 | for count, p in itr: | ||
self.count = count | ||||
Durham Goode
|
r34259 | self.current = p | ||
Durham Goode
|
r34150 | yield p | ||
Gregory Szorc
|
r35111 | p.consume() | ||
Durham Goode
|
r34259 | self.current = None | ||
Augie Fackler
|
r43346 | |||
Durham Goode
|
r34150 | self.iterator = func() | ||
return self.iterator | ||||
Durham Goode
|
r34149 | |||
Durham Goode
|
r34151 | def __exit__(self, type, exc, tb): | ||
Durham Goode
|
r34150 | if not self.iterator: | ||
return | ||||
Durham Goode
|
r34638 | # Only gracefully abort in a normal exception situation. User aborts | ||
# like Ctrl+C throw a KeyboardInterrupt which is not a base Exception, | ||||
# and should not gracefully cleanup. | ||||
if isinstance(exc, Exception): | ||||
Durham Goode
|
r34151 | # Any exceptions seeking to the end of the bundle at this point are | ||
# almost certainly related to the underlying stream being bad. | ||||
# And, chances are that the exception we're handling is related to | ||||
# getting in that bad state. So, we swallow the seeking error and | ||||
# re-raise the original error. | ||||
seekerror = False | ||||
try: | ||||
Durham Goode
|
r34638 | if self.current: | ||
# consume the part content to not corrupt the stream. | ||||
Gregory Szorc
|
r35111 | self.current.consume() | ||
Durham Goode
|
r34638 | |||
Durham Goode
|
r34151 | for part in self.iterator: | ||
# consume the bundle content | ||||
Gregory Szorc
|
r35111 | part.consume() | ||
Durham Goode
|
r34151 | except Exception: | ||
seekerror = True | ||||
# Small hack to let caller code distinguish exceptions from bundle2 | ||||
# processing from processing the old format. This is mostly needed | ||||
# to handle different return codes to unbundle according to the type | ||||
# of bundle. We should probably clean up or drop this return code | ||||
# craziness in a future version. | ||||
exc.duringunbundle2 = True | ||||
salvaged = [] | ||||
replycaps = None | ||||
if self.op.reply is not None: | ||||
salvaged = self.op.reply.salvageoutput() | ||||
replycaps = self.op.reply.capabilities | ||||
exc._replycaps = replycaps | ||||
exc._bundle2salvagedoutput = salvaged | ||||
# Re-raising from a variable loses the original stack. So only use | ||||
# that form if we need to. | ||||
if seekerror: | ||||
raise exc | ||||
Augie Fackler
|
r43346 | self.repo.ui.debug( | ||
Augie Fackler
|
r43347 | b'bundle2-input-bundle: %i parts total\n' % self.count | ||
Augie Fackler
|
r43346 | ) | ||
Durham Goode
|
r34149 | |||
r50659 | def processbundle( | |||
repo, | ||||
unbundler, | ||||
transactiongetter=None, | ||||
op=None, | ||||
source=b'', | ||||
remote=None, | ||||
): | ||||
Pierre-Yves David
|
r20889 | """This function process a bundle, apply effect to/from a repo | ||
Pierre-Yves David
|
r20947 | It iterates over each part then searches for and uses the proper handling | ||
code to process the part. Parts are processed in order. | ||||
Pierre-Yves David
|
r20889 | |||
Pierre-Yves David
|
r20891 | Unknown Mandatory part will abort the process. | ||
Pierre-Yves David
|
r24851 | |||
It is temporarily possible to provide a prebuilt bundleoperation to the | ||||
function. This is used to ensure output is properly propagated in case of | ||||
an error during the unbundling. This output capturing part will likely be | ||||
reworked and this ability will probably go away in the process. | ||||
Pierre-Yves David
|
r20889 | """ | ||
Pierre-Yves David
|
r24851 | if op is None: | ||
if transactiongetter is None: | ||||
transactiongetter = _notransaction | ||||
r50659 | op = bundleoperation( | |||
repo, | ||||
transactiongetter, | ||||
source=source, | ||||
remote=remote, | ||||
) | ||||
Pierre-Yves David
|
r20889 | # todo: | ||
# - replace this is a init function soon. | ||||
# - exception catching | ||||
unbundler.params | ||||
Pierre-Yves David
|
r25331 | if repo.ui.debugflag: | ||
Augie Fackler
|
r43347 | msg = [b'bundle2-input-bundle:'] | ||
Pierre-Yves David
|
r25331 | if unbundler.params: | ||
Augie Fackler
|
r43347 | msg.append(b' %i params' % len(unbundler.params)) | ||
Boris Feld
|
r33765 | if op._gettransaction is None or op._gettransaction is _notransaction: | ||
Augie Fackler
|
r43347 | msg.append(b' no-transaction') | ||
Pierre-Yves David
|
r25331 | else: | ||
Augie Fackler
|
r43347 | msg.append(b' with-transaction') | ||
msg.append(b'\n') | ||||
repo.ui.debug(b''.join(msg)) | ||||
Durham Goode
|
r34149 | |||
Durham Goode
|
r34262 | processparts(repo, op, unbundler) | ||
return op | ||||
Augie Fackler
|
r43346 | |||
Durham Goode
|
r34262 | def processparts(repo, op, unbundler): | ||
Durham Goode
|
r34151 | with partiterator(repo, op, unbundler) as parts: | ||
for part in parts: | ||||
_processpart(op, part) | ||||
Pierre-Yves David
|
r25332 | |||
Augie Fackler
|
r43346 | |||
Martin von Zweigbergk
|
r33038 | def _processchangegroup(op, cg, tr, source, url, **kwargs): | ||
r50661 | if op.remote is not None and op.remote.path is not None: | |||
remote_path = op.remote.path | ||||
kwargs = kwargs.copy() | ||||
kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy | ||||
Boris Feld
|
r33461 | ret = cg.apply(op.repo, tr, source, url, **kwargs) | ||
Augie Fackler
|
r46554 | op.records.add( | ||
b'changegroup', | ||||
{ | ||||
b'return': ret, | ||||
}, | ||||
) | ||||
Martin von Zweigbergk
|
r33038 | return ret | ||
Augie Fackler
|
r43346 | |||
Durham Goode
|
r34260 | def _gethandler(op, part): | ||
Augie Fackler
|
r43347 | status = b'unknown' # used by debug output | ||
Durham Goode
|
r34260 | try: | ||
handler = parthandlermapping.get(part.type) | ||||
if handler is None: | ||||
Augie Fackler
|
r43347 | status = b'unsupported-type' | ||
Durham Goode
|
r34260 | raise error.BundleUnknownFeatureError(parttype=part.type) | ||
Augie Fackler
|
r43347 | indebug(op.ui, b'found a handler for part %s' % part.type) | ||
Durham Goode
|
r34260 | unknownparams = part.mandatorykeys - handler.params | ||
if unknownparams: | ||||
unknownparams = list(unknownparams) | ||||
unknownparams.sort() | ||||
Augie Fackler
|
r43347 | status = b'unsupported-params (%s)' % b', '.join(unknownparams) | ||
Augie Fackler
|
r43346 | raise error.BundleUnknownFeatureError( | ||
parttype=part.type, params=unknownparams | ||||
) | ||||
Augie Fackler
|
r43347 | status = b'supported' | ||
Durham Goode
|
r34260 | except error.BundleUnknownFeatureError as exc: | ||
Augie Fackler
|
r43346 | if part.mandatory: # mandatory parts | ||
Durham Goode
|
r34260 | raise | ||
Augie Fackler
|
r43347 | indebug(op.ui, b'ignoring unsupported advisory part %s' % exc) | ||
Augie Fackler
|
r43346 | return # skip to part processing | ||
Durham Goode
|
r34260 | finally: | ||
if op.ui.debugflag: | ||||
Augie Fackler
|
r43347 | msg = [b'bundle2-input-part: "%s"' % part.type] | ||
Durham Goode
|
r34260 | if not part.mandatory: | ||
Augie Fackler
|
r43347 | msg.append(b' (advisory)') | ||
Durham Goode
|
r34260 | nbmp = len(part.mandatorykeys) | ||
nbap = len(part.params) - nbmp | ||||
if nbmp or nbap: | ||||
Augie Fackler
|
r43347 | msg.append(b' (params:') | ||
Durham Goode
|
r34260 | if nbmp: | ||
Augie Fackler
|
r43347 | msg.append(b' %i mandatory' % nbmp) | ||
Durham Goode
|
r34260 | if nbap: | ||
Augie Fackler
|
r43347 | msg.append(b' %i advisory' % nbmp) | ||
msg.append(b')') | ||||
msg.append(b' %s\n' % status) | ||||
op.ui.debug(b''.join(msg)) | ||||
Durham Goode
|
r34260 | |||
return handler | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r23008 | def _processpart(op, part): | ||
"""process a single part from a bundle | ||||
The part is guaranteed to have been fully consumed when the function exits | ||||
(even if an exception is raised).""" | ||||
Durham Goode
|
r34261 | handler = _gethandler(op, part) | ||
if handler is None: | ||||
return | ||||
Pierre-Yves David
|
r23008 | |||
Durham Goode
|
r34261 | # handler is called outside the above try block so that we don't | ||
# risk catching KeyErrors from anything other than the | ||||
# parthandlermapping lookup (any KeyError raised by handler() | ||||
# itself represents a defect of a different variety). | ||||
output = None | ||||
if op.captureoutput and op.reply is not None: | ||||
op.ui.pushbuffer(error=True, subproc=True) | ||||
Augie Fackler
|
r43347 | output = b'' | ||
Durham Goode
|
r34261 | try: | ||
handler(op, part) | ||||
Pierre-Yves David
|
r23008 | finally: | ||
Durham Goode
|
r34261 | if output is not None: | ||
output = op.ui.popbuffer() | ||||
if output: | ||||
Augie Fackler
|
r43347 | outpart = op.reply.newpart(b'output', data=output, mandatory=False) | ||
Durham Goode
|
r34261 | outpart.addparam( | ||
Augie Fackler
|
r43347 | b'in-reply-to', pycompat.bytestr(part.id), mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r23008 | |||
Matt Harbison
|
r52564 | def decodecaps(blob: bytes) -> "Capabilities": | ||
Mads Kiilerich
|
r23139 | """decode a bundle2 caps bytes blob into a dictionary | ||
Pierre-Yves David
|
r21138 | |||
The blob is a list of capabilities (one per line) | ||||
Capabilities may have values using a line of the form:: | ||||
capability=value1,value2,value3 | ||||
The values are always a list.""" | ||||
caps = {} | ||||
for line in blob.splitlines(): | ||||
if not line: | ||||
continue | ||||
Augie Fackler
|
r43347 | if b'=' not in line: | ||
Pierre-Yves David
|
r21138 | key, vals = line, () | ||
else: | ||||
Augie Fackler
|
r43347 | key, vals = line.split(b'=', 1) | ||
vals = vals.split(b',') | ||||
timeless
|
r28883 | key = urlreq.unquote(key) | ||
vals = [urlreq.unquote(v) for v in vals] | ||||
Pierre-Yves David
|
r21138 | caps[key] = vals | ||
return caps | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r21139 | def encodecaps(caps): | ||
"""encode a bundle2 caps dictionary into a bytes blob""" | ||||
chunks = [] | ||||
for ca in sorted(caps): | ||||
vals = caps[ca] | ||||
timeless
|
r28883 | ca = urlreq.quote(ca) | ||
vals = [urlreq.quote(v) for v in vals] | ||||
Pierre-Yves David
|
r21139 | if vals: | ||
Augie Fackler
|
r43347 | ca = b"%s=%s" % (ca, b','.join(vals)) | ||
Pierre-Yves David
|
r21139 | chunks.append(ca) | ||
Augie Fackler
|
r43347 | return b'\n'.join(chunks) | ||
Pierre-Yves David
|
r21139 | |||
Augie Fackler
|
r43346 | |||
Martin von Zweigbergk
|
r28666 | bundletypes = { | ||
Augie Fackler
|
r43347 | b"": (b"", b'UN'), # only when using unbundle on ssh and old http servers | ||
Augie Fackler
|
r43346 | # since the unification ssh accepts a header but there | ||
# is no capability signaling it. | ||||
Augie Fackler
|
r43347 | b"HG20": (), # special-cased below | ||
b"HG10UN": (b"HG10UN", b'UN'), | ||||
b"HG10BZ": (b"HG10", b'BZ'), | ||||
b"HG10GZ": (b"HG10GZ", b'GZ'), | ||||
Martin von Zweigbergk
|
r28666 | } | ||
# hgweb uses this list to communicate its preferred type | ||||
Augie Fackler
|
r43347 | bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN'] | ||
Martin von Zweigbergk
|
r28666 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r49801 | class bundle20: | ||
Pierre-Yves David
|
r20801 | """represent an outgoing bundle2 container | ||
Pierre-Yves David
|
r21599 | Use the `addparam` method to add stream level parameter. and `newpart` to | ||
Pierre-Yves David
|
r20856 | populate it. Then call `getchunks` to retrieve all the binary chunks of | ||
Mads Kiilerich
|
r21024 | data that compose the bundle2 container.""" | ||
Pierre-Yves David
|
r20801 | |||
Augie Fackler
|
r43347 | _magicstring = b'HG20' | ||
Pierre-Yves David
|
r24640 | |||
Matt Harbison
|
r52564 | def __init__(self, ui, capabilities: "Optional[Capabilities]" = None): | ||
if capabilities is None: | ||||
capabilities = {} | ||||
Pierre-Yves David
|
r20842 | self.ui = ui | ||
Pierre-Yves David
|
r20801 | self._params = [] | ||
self._parts = [] | ||||
Matt Harbison
|
r52564 | self.capabilities: "Capabilities" = dict(capabilities) | ||
Augie Fackler
|
r43347 | self._compengine = util.compengines.forbundletype(b'UN') | ||
Gregory Szorc
|
r30757 | self._compopts = None | ||
Gregory Szorc
|
r35805 | # If compression is being handled by a consumer of the raw | ||
# data (e.g. the wire protocol), unsetting this flag tells | ||||
# consumers that the bundle is best left uncompressed. | ||||
self.prefercompressed = True | ||||
Pierre-Yves David
|
r26404 | |||
Gregory Szorc
|
r30757 | def setcompression(self, alg, compopts=None): | ||
Pierre-Yves David
|
r26404 | """setup core part compression to <alg>""" | ||
Augie Fackler
|
r43347 | if alg in (None, b'UN'): | ||
Pierre-Yves David
|
r26404 | return | ||
Augie Fackler
|
r43347 | assert not any(n.lower() == b'compression' for n, v in self._params) | ||
self.addparam(b'Compression', alg) | ||||
Gregory Szorc
|
r30351 | self._compengine = util.compengines.forbundletype(alg) | ||
Gregory Szorc
|
r30757 | self._compopts = compopts | ||
Pierre-Yves David
|
r20801 | |||
Pierre-Yves David
|
r21900 | @property | ||
def nbparts(self): | ||||
"""total number of parts added to the bundler""" | ||||
return len(self._parts) | ||||
Pierre-Yves David
|
r21597 | # methods used to defines the bundle2 content | ||
Pierre-Yves David
|
r20804 | def addparam(self, name, value=None): | ||
"""add a stream level parameter""" | ||||
Pierre-Yves David
|
r20813 | if not name: | ||
Yuya Nishihara
|
r38628 | raise error.ProgrammingError(b'empty parameter name') | ||
Augie Fackler
|
r43789 | if name[0:1] not in pycompat.bytestr( | ||
string.ascii_letters # pytype: disable=wrong-arg-types | ||||
): | ||||
Augie Fackler
|
r43346 | raise error.ProgrammingError( | ||
b'non letter first character: %s' % name | ||||
) | ||||
Pierre-Yves David
|
r20804 | self._params.append((name, value)) | ||
Pierre-Yves David
|
r20856 | def addpart(self, part): | ||
"""add a new part to the bundle2 container | ||||
Mads Kiilerich
|
r21024 | Parts contains the actual applicative payload.""" | ||
Pierre-Yves David
|
r20995 | assert part.id is None | ||
Augie Fackler
|
r43346 | part.id = len(self._parts) # very cheap counter | ||
Pierre-Yves David
|
r20856 | self._parts.append(part) | ||
Pierre-Yves David
|
r21598 | def newpart(self, typeid, *args, **kwargs): | ||
Pierre-Yves David
|
r21602 | """create a new part and add it to the containers | ||
As the part is directly added to the containers. For now, this means | ||||
that any failure to properly initialize the part after calling | ||||
``newpart`` should result in a failure of the whole bundling process. | ||||
You can still fall back to manually create and add if you need better | ||||
control.""" | ||||
Pierre-Yves David
|
r21598 | part = bundlepart(typeid, *args, **kwargs) | ||
Pierre-Yves David
|
r21599 | self.addpart(part) | ||
Pierre-Yves David
|
r21598 | return part | ||
Pierre-Yves David
|
r21597 | # methods used to generate the bundle2 stream | ||
Pierre-Yves David
|
r20801 | def getchunks(self): | ||
Pierre-Yves David
|
r25322 | if self.ui.debugflag: | ||
Augie Fackler
|
r43347 | msg = [b'bundle2-output-bundle: "%s",' % self._magicstring] | ||
Pierre-Yves David
|
r25322 | if self._params: | ||
Augie Fackler
|
r43347 | msg.append(b' (%i params)' % len(self._params)) | ||
msg.append(b' %i parts total\n' % len(self._parts)) | ||||
self.ui.debug(b''.join(msg)) | ||||
outdebug(self.ui, b'start emission of %s stream' % self._magicstring) | ||||
Pierre-Yves David
|
r24640 | yield self._magicstring | ||
Pierre-Yves David
|
r20804 | param = self._paramchunk() | ||
Augie Fackler
|
r43347 | outdebug(self.ui, b'bundle parameter: %s' % param) | ||
Pierre-Yves David
|
r20804 | yield _pack(_fstreamparamsize, len(param)) | ||
if param: | ||||
yield param | ||||
Augie Fackler
|
r43346 | for chunk in self._compengine.compressstream( | ||
self._getcorechunk(), self._compopts | ||||
): | ||||
Gregory Szorc
|
r30357 | yield chunk | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r20804 | def _paramchunk(self): | ||
"""return a encoded version of all stream parameters""" | ||||
blocks = [] | ||||
Pierre-Yves David
|
r20809 | for par, value in self._params: | ||
timeless
|
r28883 | par = urlreq.quote(par) | ||
Pierre-Yves David
|
r20809 | if value is not None: | ||
timeless
|
r28883 | value = urlreq.quote(value) | ||
Augie Fackler
|
r43347 | par = b'%s=%s' % (par, value) | ||
Pierre-Yves David
|
r20809 | blocks.append(par) | ||
Augie Fackler
|
r43347 | return b' '.join(blocks) | ||
Pierre-Yves David
|
r20804 | |||
Pierre-Yves David
|
r26396 | def _getcorechunk(self): | ||
"""yield chunk for the core part of the bundle | ||||
(all but headers and parameters)""" | ||||
Augie Fackler
|
r43347 | outdebug(self.ui, b'start of parts') | ||
Pierre-Yves David
|
r26396 | for part in self._parts: | ||
Augie Fackler
|
r43347 | outdebug(self.ui, b'bundle part: "%s"' % part.type) | ||
Pierre-Yves David
|
r26396 | for chunk in part.getchunks(ui=self.ui): | ||
yield chunk | ||||
Augie Fackler
|
r43347 | outdebug(self.ui, b'end of bundle') | ||
Pierre-Yves David
|
r26396 | yield _pack(_fpartheadersize, 0) | ||
Pierre-Yves David
|
r24794 | def salvageoutput(self): | ||
"""return a list with a copy of all output parts in the bundle | ||||
This is meant to be used during error handling to make sure we preserve | ||||
server output""" | ||||
salvaged = [] | ||||
for part in self._parts: | ||||
Augie Fackler
|
r43347 | if part.type.startswith(b'output'): | ||
Pierre-Yves David
|
r24794 | salvaged.append(part.copy()) | ||
return salvaged | ||||
Gregory Szorc
|
r49801 | class unpackermixin: | ||
Pierre-Yves David
|
r21013 | """A mixin to extract bytes and struct data from a stream""" | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r21013 | def __init__(self, fp): | ||
Pierre-Yves David
|
r20802 | self._fp = fp | ||
def _unpack(self, format): | ||||
Pierre-Yves David
|
r31862 | """unpack this struct format from the stream | ||
This method is meant for internal usage by the bundle2 protocol only. | ||||
They directly manipulate the low level stream including bundle2 level | ||||
instruction. | ||||
Do not use it to implement higher-level logic or methods.""" | ||||
Pierre-Yves David
|
r20802 | data = self._readexact(struct.calcsize(format)) | ||
return _unpack(format, data) | ||||
def _readexact(self, size): | ||||
Pierre-Yves David
|
r31862 | """read exactly <size> bytes from the stream | ||
This method is meant for internal usage by the bundle2 protocol only. | ||||
They directly manipulate the low level stream including bundle2 level | ||||
instruction. | ||||
Do not use it to implement higher-level logic or methods.""" | ||||
Pierre-Yves David
|
r20802 | return changegroup.readexactly(self._fp, size) | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r25640 | def getunbundler(ui, fp, magicstring=None): | ||
"""return a valid unbundler object for a given magicstring""" | ||||
if magicstring is None: | ||||
magicstring = changegroup.readexactly(fp, 4) | ||||
magic, version = magicstring[0:2], magicstring[2:4] | ||||
Augie Fackler
|
r43347 | if magic != b'HG': | ||
Siddharth Agarwal
|
r33122 | ui.debug( | ||
Augie Fackler
|
r43347 | b"error: invalid magic: %r (version %r), should be 'HG'\n" | ||
Augie Fackler
|
r43346 | % (magic, version) | ||
) | ||||
Augie Fackler
|
r43347 | raise error.Abort(_(b'not a Mercurial bundle')) | ||
Pierre-Yves David
|
r24648 | unbundlerclass = formatmap.get(version) | ||
if unbundlerclass is None: | ||||
Augie Fackler
|
r43347 | raise error.Abort(_(b'unknown bundle version %s') % version) | ||
Pierre-Yves David
|
r24648 | unbundler = unbundlerclass(ui, fp) | ||
Augie Fackler
|
r43347 | indebug(ui, b'start processing of %s stream' % magicstring) | ||
Pierre-Yves David
|
r24642 | return unbundler | ||
Pierre-Yves David
|
r24641 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r21013 | class unbundle20(unpackermixin): | ||
"""interpret a bundle2 stream | ||||
Pierre-Yves David
|
r21129 | This class is fed with a binary stream and yields parts through its | ||
`iterparts` methods.""" | ||||
Pierre-Yves David
|
r21013 | |||
Augie Fackler
|
r43347 | _magicstring = b'HG20' | ||
Pierre-Yves David
|
r26542 | |||
Pierre-Yves David
|
r24642 | def __init__(self, ui, fp): | ||
Pierre-Yves David
|
r21066 | """If header is specified, we do not read it out of the stream.""" | ||
Pierre-Yves David
|
r21013 | self.ui = ui | ||
Augie Fackler
|
r43347 | self._compengine = util.compengines.forbundletype(b'UN') | ||
Pierre-Yves David
|
r26802 | self._compressed = None | ||
Pierre-Yves David
|
r21013 | super(unbundle20, self).__init__(fp) | ||
Pierre-Yves David
|
r20802 | @util.propertycache | ||
def params(self): | ||||
Mads Kiilerich
|
r21024 | """dictionary of stream level parameters""" | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'reading bundle2 stream parameters') | ||
Pierre-Yves David
|
r20805 | params = {} | ||
paramssize = self._unpack(_fstreamparamsize)[0] | ||||
Pierre-Yves David
|
r23011 | if paramssize < 0: | ||
Augie Fackler
|
r43346 | raise error.BundleValueError( | ||
Augie Fackler
|
r43347 | b'negative bundle param size: %i' % paramssize | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r20805 | if paramssize: | ||
Pierre-Yves David
|
r26541 | params = self._readexact(paramssize) | ||
params = self._processallparams(params) | ||||
Pierre-Yves David
|
r20805 | return params | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r26541 | def _processallparams(self, paramsblock): | ||
Kyle Lippincott
|
r47856 | """ """ | ||
Gregory Szorc
|
r29591 | params = util.sortdict() | ||
Augie Fackler
|
r43347 | for p in paramsblock.split(b' '): | ||
p = p.split(b'=', 1) | ||||
timeless
|
r28883 | p = [urlreq.unquote(i) for i in p] | ||
Pierre-Yves David
|
r26541 | if len(p) < 2: | ||
p.append(None) | ||||
self._processparam(*p) | ||||
params[p[0]] = p[1] | ||||
return params | ||||
Pierre-Yves David
|
r20844 | def _processparam(self, name, value): | ||
"""process a parameter, applying its effect if needed | ||||
Parameter starting with a lower case letter are advisory and will be | ||||
ignored when unknown. Those starting with an upper case letter are | ||||
mandatory and will this function will raise a KeyError when unknown. | ||||
Note: no option are currently supported. Any input will be either | ||||
ignored or failing. | ||||
""" | ||||
if not name: | ||||
Augie Fackler
|
r43906 | raise ValueError('empty parameter name') | ||
Augie Fackler
|
r43789 | if name[0:1] not in pycompat.bytestr( | ||
string.ascii_letters # pytype: disable=wrong-arg-types | ||||
): | ||||
Augie Fackler
|
r43906 | raise ValueError('non letter first character: %s' % name) | ||
Pierre-Yves David
|
r26395 | try: | ||
handler = b2streamparamsmap[name.lower()] | ||||
except KeyError: | ||||
Augie Fackler
|
r34278 | if name[0:1].islower(): | ||
Augie Fackler
|
r43347 | indebug(self.ui, b"ignoring unknown parameter %s" % name) | ||
Pierre-Yves David
|
r26395 | else: | ||
raise error.BundleUnknownFeatureError(params=(name,)) | ||||
Pierre-Yves David
|
r20844 | else: | ||
Pierre-Yves David
|
r26395 | handler(self, name, value) | ||
Pierre-Yves David
|
r20844 | |||
Pierre-Yves David
|
r26542 | def _forwardchunks(self): | ||
"""utility to transfer a bundle2 as binary | ||||
This is made necessary by the fact the 'getbundle' command over 'ssh' | ||||
Joerg Sonnenberger
|
r51885 | have no way to know when the reply ends, relying on the bundle to be | ||
Pierre-Yves David
|
r26542 | interpreted to know its end. This is terrible and we are sorry, but we | ||
needed to move forward to get general delta enabled. | ||||
""" | ||||
yield self._magicstring | ||||
Martin von Zweigbergk
|
r43744 | assert 'params' not in vars(self) | ||
Pierre-Yves David
|
r26542 | paramssize = self._unpack(_fstreamparamsize)[0] | ||
if paramssize < 0: | ||||
Augie Fackler
|
r43346 | raise error.BundleValueError( | ||
Augie Fackler
|
r43347 | b'negative bundle param size: %i' % paramssize | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r26542 | if paramssize: | ||
params = self._readexact(paramssize) | ||||
self._processallparams(params) | ||||
Joerg Sonnenberger
|
r42319 | # The payload itself is decompressed below, so drop | ||
# the compression parameter passed down to compensate. | ||||
outparams = [] | ||||
Augie Fackler
|
r43347 | for p in params.split(b' '): | ||
k, v = p.split(b'=', 1) | ||||
if k.lower() != b'compression': | ||||
Joerg Sonnenberger
|
r42319 | outparams.append(p) | ||
Augie Fackler
|
r43347 | outparams = b' '.join(outparams) | ||
Joerg Sonnenberger
|
r42319 | yield _pack(_fstreamparamsize, len(outparams)) | ||
yield outparams | ||||
else: | ||||
yield _pack(_fstreamparamsize, paramssize) | ||||
Pierre-Yves David
|
r26542 | # From there, payload might need to be decompressed | ||
Gregory Szorc
|
r30353 | self._fp = self._compengine.decompressorreader(self._fp) | ||
Pierre-Yves David
|
r26542 | emptycount = 0 | ||
while emptycount < 2: | ||||
# so we can brainlessly loop | ||||
assert _fpartheadersize == _fpayloadsize | ||||
size = self._unpack(_fpartheadersize)[0] | ||||
yield _pack(_fpartheadersize, size) | ||||
if size: | ||||
emptycount = 0 | ||||
else: | ||||
emptycount += 1 | ||||
continue | ||||
if size == flaginterrupt: | ||||
continue | ||||
elif size < 0: | ||||
Augie Fackler
|
r43347 | raise error.BundleValueError(b'negative chunk size: %i') | ||
Pierre-Yves David
|
r26542 | yield self._readexact(size) | ||
Gregory Szorc
|
r35113 | def iterparts(self, seekable=False): | ||
Pierre-Yves David
|
r20802 | """yield all parts contained in the stream""" | ||
Gregory Szorc
|
r35113 | cls = seekableunbundlepart if seekable else unbundlepart | ||
Pierre-Yves David
|
r20802 | # make sure param have been loaded | ||
self.params | ||||
Pierre-Yves David
|
r26404 | # From there, payload need to be decompressed | ||
Gregory Szorc
|
r30353 | self._fp = self._compengine.decompressorreader(self._fp) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'start extraction of bundle2 parts') | ||
Pierre-Yves David
|
r21014 | headerblock = self._readpartheader() | ||
while headerblock is not None: | ||||
Gregory Szorc
|
r35113 | part = cls(self.ui, headerblock, self._fp) | ||
Pierre-Yves David
|
r20802 | yield part | ||
Gregory Szorc
|
r35111 | # Ensure part is fully consumed so we can start reading the next | ||
# part. | ||||
part.consume() | ||||
Gregory Szorc
|
r35112 | |||
Pierre-Yves David
|
r21014 | headerblock = self._readpartheader() | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'end of bundle2 stream') | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r21014 | def _readpartheader(self): | ||
"""reads a part header size and return the bytes blob | ||||
Pierre-Yves David
|
r20864 | |||
Pierre-Yves David
|
r21014 | returns None if empty""" | ||
Pierre-Yves David
|
r20864 | headersize = self._unpack(_fpartheadersize)[0] | ||
Pierre-Yves David
|
r23011 | if headersize < 0: | ||
Augie Fackler
|
r43346 | raise error.BundleValueError( | ||
Augie Fackler
|
r43347 | b'negative part header size: %i' % headersize | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'part header size: %i' % headersize) | ||
Pierre-Yves David
|
r21014 | if headersize: | ||
return self._readexact(headersize) | ||||
return None | ||||
Pierre-Yves David
|
r20864 | |||
Eric Sumner
|
r24071 | def compressed(self): | ||
Augie Fackler
|
r43346 | self.params # load params | ||
Pierre-Yves David
|
r26802 | return self._compressed | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r31863 | def close(self): | ||
"""close underlying file""" | ||||
r51821 | if hasattr(self._fp, 'close'): | |||
Pierre-Yves David
|
r31863 | return self._fp.close() | ||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | formatmap = {b'20': unbundle20} | ||
Pierre-Yves David
|
r24648 | |||
Pierre-Yves David
|
r26395 | b2streamparamsmap = {} | ||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r26395 | def b2streamparamhandler(name): | ||
"""register a handler for a stream level parameter""" | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r26395 | def decorator(func): | ||
assert name not in formatmap | ||||
b2streamparamsmap[name] = func | ||||
return func | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r26395 | return decorator | ||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @b2streamparamhandler(b'compression') | ||
Pierre-Yves David
|
r26404 | def processcompression(unbundler, param, value): | ||
"""read compression parameter and install payload decompression""" | ||||
Gregory Szorc
|
r30353 | if value not in util.compengines.supportedbundletypes: | ||
Augie Fackler
|
r43346 | raise error.BundleUnknownFeatureError(params=(param,), values=(value,)) | ||
Gregory Szorc
|
r30353 | unbundler._compengine = util.compengines.forbundletype(value) | ||
Pierre-Yves David
|
r26802 | if value is not None: | ||
unbundler._compressed = True | ||||
Pierre-Yves David
|
r26404 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r49801 | class bundlepart: | ||
Pierre-Yves David
|
r20856 | """A bundle2 part contains application level payload | ||
The part `type` is used to route the part to the application level | ||||
handler. | ||||
Pierre-Yves David
|
r21604 | |||
The part payload is contained in ``part.data``. It could be raw bytes or a | ||||
Pierre-Yves David
|
r21605 | generator of byte chunks. | ||
You can add parameters to the part using the ``addparam`` method. | ||||
Parameters can be either mandatory (default) or advisory. Remote side | ||||
should be able to safely ignore the advisory ones. | ||||
Both data and parameters cannot be modified after the generation has begun. | ||||
Pierre-Yves David
|
r20856 | """ | ||
Augie Fackler
|
r43346 | def __init__( | ||
self, | ||||
parttype, | ||||
mandatoryparams=(), | ||||
advisoryparams=(), | ||||
Augie Fackler
|
r43347 | data=b'', | ||
Augie Fackler
|
r43346 | mandatory=True, | ||
): | ||||
Pierre-Yves David
|
r23868 | validateparttype(parttype) | ||
Pierre-Yves David
|
r20995 | self.id = None | ||
Pierre-Yves David
|
r20856 | self.type = parttype | ||
Pierre-Yves David
|
r21604 | self._data = data | ||
Pierre-Yves David
|
r21605 | self._mandatoryparams = list(mandatoryparams) | ||
self._advisoryparams = list(advisoryparams) | ||||
Pierre-Yves David
|
r21607 | # checking for duplicated entries | ||
self._seenparams = set() | ||||
for pname, __ in self._mandatoryparams + self._advisoryparams: | ||||
if pname in self._seenparams: | ||||
Augie Fackler
|
r43347 | raise error.ProgrammingError(b'duplicated params: %s' % pname) | ||
Pierre-Yves David
|
r21607 | self._seenparams.add(pname) | ||
Pierre-Yves David
|
r21601 | # status of the part's generation: | ||
# - None: not started, | ||||
# - False: currently generated, | ||||
# - True: generation done. | ||||
self._generated = None | ||||
Eric Sumner
|
r23590 | self.mandatory = mandatory | ||
Pierre-Yves David
|
r20856 | |||
Pierre-Yves David
|
r30872 | def __repr__(self): | ||
Kyle Lippincott
|
r44759 | cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__) | ||
return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % ( | ||||
Augie Fackler
|
r43346 | cls, | ||
id(self), | ||||
self.id, | ||||
self.type, | ||||
self.mandatory, | ||||
) | ||||
Pierre-Yves David
|
r30872 | |||
Pierre-Yves David
|
r24793 | def copy(self): | ||
"""return a copy of the part | ||||
The new part have the very same content but no partid assigned yet. | ||||
Parts with generated data cannot be copied.""" | ||||
r51821 | assert not hasattr(self.data, 'next') | |||
Augie Fackler
|
r43346 | return self.__class__( | ||
self.type, | ||||
self._mandatoryparams, | ||||
self._advisoryparams, | ||||
self._data, | ||||
self.mandatory, | ||||
) | ||||
Pierre-Yves David
|
r24793 | |||
Pierre-Yves David
|
r21604 | # methods used to defines the part content | ||
Augie Fackler
|
r27879 | @property | ||
def data(self): | ||||
return self._data | ||||
@data.setter | ||||
def data(self, data): | ||||
Pierre-Yves David
|
r21604 | if self._generated is not None: | ||
Augie Fackler
|
r43347 | raise error.ReadOnlyPartError(b'part is being generated') | ||
Pierre-Yves David
|
r21604 | self._data = data | ||
Pierre-Yves David
|
r21605 | @property | ||
def mandatoryparams(self): | ||||
# make it an immutable tuple to force people through ``addparam`` | ||||
return tuple(self._mandatoryparams) | ||||
@property | ||||
def advisoryparams(self): | ||||
# make it an immutable tuple to force people through ``addparam`` | ||||
return tuple(self._advisoryparams) | ||||
Augie Fackler
|
r43347 | def addparam(self, name, value=b'', mandatory=True): | ||
Pierre-Yves David
|
r31861 | """add a parameter to the part | ||
If 'mandatory' is set to True, the remote handler must claim support | ||||
for this parameter or the unbundling will be aborted. | ||||
The 'name' and 'value' cannot exceed 255 bytes each. | ||||
""" | ||||
Pierre-Yves David
|
r21605 | if self._generated is not None: | ||
Augie Fackler
|
r43347 | raise error.ReadOnlyPartError(b'part is being generated') | ||
Pierre-Yves David
|
r21607 | if name in self._seenparams: | ||
Augie Fackler
|
r43347 | raise ValueError(b'duplicated params: %s' % name) | ||
Pierre-Yves David
|
r21607 | self._seenparams.add(name) | ||
Pierre-Yves David
|
r21605 | params = self._advisoryparams | ||
if mandatory: | ||||
params = self._mandatoryparams | ||||
params.append((name, value)) | ||||
Pierre-Yves David
|
r21601 | # methods used to generates the bundle2 stream | ||
Pierre-Yves David
|
r25321 | def getchunks(self, ui): | ||
Pierre-Yves David
|
r21601 | if self._generated is not None: | ||
Augie Fackler
|
r43347 | raise error.ProgrammingError(b'part can only be consumed once') | ||
Pierre-Yves David
|
r21601 | self._generated = False | ||
Pierre-Yves David
|
r25323 | |||
if ui.debugflag: | ||||
Augie Fackler
|
r43347 | msg = [b'bundle2-output-part: "%s"' % self.type] | ||
Pierre-Yves David
|
r25323 | if not self.mandatory: | ||
Augie Fackler
|
r43347 | msg.append(b' (advisory)') | ||
Pierre-Yves David
|
r25323 | nbmp = len(self.mandatoryparams) | ||
nbap = len(self.advisoryparams) | ||||
if nbmp or nbap: | ||||
Augie Fackler
|
r43347 | msg.append(b' (params:') | ||
Pierre-Yves David
|
r25323 | if nbmp: | ||
Augie Fackler
|
r43347 | msg.append(b' %i mandatory' % nbmp) | ||
Pierre-Yves David
|
r25323 | if nbap: | ||
Augie Fackler
|
r43347 | msg.append(b' %i advisory' % nbmp) | ||
msg.append(b')') | ||||
Pierre-Yves David
|
r25323 | if not self.data: | ||
Augie Fackler
|
r43347 | msg.append(b' empty payload') | ||
r51821 | elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'): | |||
Augie Fackler
|
r43347 | msg.append(b' streamed payload') | ||
Pierre-Yves David
|
r25323 | else: | ||
Augie Fackler
|
r43347 | msg.append(b' %i bytes payload' % len(self.data)) | ||
msg.append(b'\n') | ||||
ui.debug(b''.join(msg)) | ||||
Pierre-Yves David
|
r25323 | |||
Pierre-Yves David
|
r20877 | #### header | ||
Eric Sumner
|
r23590 | if self.mandatory: | ||
parttype = self.type.upper() | ||||
else: | ||||
parttype = self.type.lower() | ||||
Augie Fackler
|
r43347 | outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype)) | ||
Pierre-Yves David
|
r20877 | ## parttype | ||
Augie Fackler
|
r43346 | header = [ | ||
_pack(_fparttypesize, len(parttype)), | ||||
parttype, | ||||
_pack(_fpartid, self.id), | ||||
] | ||||
Pierre-Yves David
|
r20877 | ## parameters | ||
# count | ||||
manpar = self.mandatoryparams | ||||
advpar = self.advisoryparams | ||||
header.append(_pack(_fpartparamcount, len(manpar), len(advpar))) | ||||
# size | ||||
parsizes = [] | ||||
for key, value in manpar: | ||||
parsizes.append(len(key)) | ||||
parsizes.append(len(value)) | ||||
for key, value in advpar: | ||||
parsizes.append(len(key)) | ||||
parsizes.append(len(value)) | ||||
Augie Fackler
|
r33635 | paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes) | ||
Pierre-Yves David
|
r20877 | header.append(paramsizes) | ||
# key, value | ||||
for key, value in manpar: | ||||
header.append(key) | ||||
header.append(value) | ||||
for key, value in advpar: | ||||
header.append(key) | ||||
header.append(value) | ||||
## finalize header | ||||
Augie Fackler
|
r34246 | try: | ||
Augie Fackler
|
r43347 | headerchunk = b''.join(header) | ||
Augie Fackler
|
r34246 | except TypeError: | ||
Augie Fackler
|
r43346 | raise TypeError( | ||
Augie Fackler
|
r43906 | 'Found a non-bytes trying to ' | ||
'build bundle part header: %r' % header | ||||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | outdebug(ui, b'header chunk size: %i' % len(headerchunk)) | ||
Pierre-Yves David
|
r20856 | yield _pack(_fpartheadersize, len(headerchunk)) | ||
yield headerchunk | ||||
Pierre-Yves David
|
r20877 | ## payload | ||
Pierre-Yves David
|
r23067 | try: | ||
for chunk in self._payloadchunks(): | ||||
Augie Fackler
|
r43347 | outdebug(ui, b'payload chunk size: %i' % len(chunk)) | ||
Pierre-Yves David
|
r23067 | yield _pack(_fpayloadsize, len(chunk)) | ||
yield chunk | ||||
Augie Fackler
|
r26144 | except GeneratorExit: | ||
# GeneratorExit means that nobody is listening for our | ||||
# results anyway, so just bail quickly rather than trying | ||||
# to produce an error part. | ||||
Augie Fackler
|
r43347 | ui.debug(b'bundle2-generatorexit\n') | ||
Augie Fackler
|
r26144 | raise | ||
Gregory Szorc
|
r25660 | except BaseException as exc: | ||
Yuya Nishihara
|
r37102 | bexc = stringutil.forcebytestr(exc) | ||
Pierre-Yves David
|
r23067 | # backup exception data for later | ||
Augie Fackler
|
r43346 | ui.debug( | ||
Augie Fackler
|
r43347 | b'bundle2-input-stream-interrupt: encoding exception %s' % bexc | ||
Augie Fackler
|
r43346 | ) | ||
Yuya Nishihara
|
r32186 | tb = sys.exc_info()[2] | ||
Augie Fackler
|
r43347 | msg = b'unexpected error: %s' % bexc | ||
Augie Fackler
|
r43346 | interpart = bundlepart( | ||
Augie Fackler
|
r43347 | b'error:abort', [(b'message', msg)], mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r23067 | interpart.id = 0 | ||
yield _pack(_fpayloadsize, -1) | ||||
Pierre-Yves David
|
r25321 | for chunk in interpart.getchunks(ui=ui): | ||
Pierre-Yves David
|
r23067 | yield chunk | ||
Augie Fackler
|
r43347 | outdebug(ui, b'closing payload chunk') | ||
Pierre-Yves David
|
r23067 | # abort current part payload | ||
yield _pack(_fpayloadsize, 0) | ||||
Yuya Nishihara
|
r32186 | pycompat.raisewithtb(exc, tb) | ||
Pierre-Yves David
|
r21000 | # end of payload | ||
Augie Fackler
|
r43347 | outdebug(ui, b'closing payload chunk') | ||
Pierre-Yves David
|
r21000 | yield _pack(_fpayloadsize, 0) | ||
Pierre-Yves David
|
r21601 | self._generated = True | ||
Pierre-Yves David
|
r21000 | |||
def _payloadchunks(self): | ||||
"""yield chunks of a the part payload | ||||
Exists to handle the different methods to provide data to a part.""" | ||||
Pierre-Yves David
|
r20876 | # we only support fixed size data now. | ||
# This will be improved in the future. | ||||
r51821 | if hasattr(self.data, 'next') or hasattr(self.data, '__next__'): | |||
Pierre-Yves David
|
r21001 | buff = util.chunkbuffer(self.data) | ||
chunk = buff.read(preferedchunksize) | ||||
while chunk: | ||||
yield chunk | ||||
chunk = buff.read(preferedchunksize) | ||||
elif len(self.data): | ||||
Pierre-Yves David
|
r20876 | yield self.data | ||
Pierre-Yves David
|
r20802 | |||
Pierre-Yves David
|
r23066 | |||
flaginterrupt = -1 | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r23066 | class interrupthandler(unpackermixin): | ||
"""read one part and process it with restricted capability | ||||
This allows to transmit exception raised on the producer size during part | ||||
iteration while the consumer is reading a part. | ||||
Part processed in this manner only have access to a ui object,""" | ||||
def __init__(self, ui, fp): | ||||
super(interrupthandler, self).__init__(fp) | ||||
self.ui = ui | ||||
def _readpartheader(self): | ||||
"""reads a part header size and return the bytes blob | ||||
returns None if empty""" | ||||
headersize = self._unpack(_fpartheadersize)[0] | ||||
if headersize < 0: | ||||
Augie Fackler
|
r43346 | raise error.BundleValueError( | ||
Augie Fackler
|
r43347 | b'negative part header size: %i' % headersize | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'part header size: %i\n' % headersize) | ||
Pierre-Yves David
|
r23066 | if headersize: | ||
return self._readexact(headersize) | ||||
return None | ||||
def __call__(self): | ||||
Augie Fackler
|
r43346 | self.ui.debug( | ||
Martin von Zweigbergk
|
r43387 | b'bundle2-input-stream-interrupt: opening out of band context\n' | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'bundle2 stream interruption, looking for a part.') | ||
Pierre-Yves David
|
r23066 | headerblock = self._readpartheader() | ||
if headerblock is None: | ||||
Augie Fackler
|
r43347 | indebug(self.ui, b'no part found during interruption.') | ||
Pierre-Yves David
|
r23066 | return | ||
Gregory Szorc
|
r35113 | part = unbundlepart(self.ui, headerblock, self._fp) | ||
Pierre-Yves David
|
r23066 | op = interruptoperation(self.ui) | ||
Durham Goode
|
r34259 | hardabort = False | ||
try: | ||||
_processpart(op, part) | ||||
except (SystemExit, KeyboardInterrupt): | ||||
hardabort = True | ||||
raise | ||||
finally: | ||||
if not hardabort: | ||||
Gregory Szorc
|
r35111 | part.consume() | ||
Augie Fackler
|
r43346 | self.ui.debug( | ||
Martin von Zweigbergk
|
r43387 | b'bundle2-input-stream-interrupt: closing out of band context\n' | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r23066 | |||
Gregory Szorc
|
r49801 | class interruptoperation: | ||
Pierre-Yves David
|
r23066 | """A limited operation to be use by part handler during interruption | ||
It only have access to an ui object. | ||||
""" | ||||
def __init__(self, ui): | ||||
self.ui = ui | ||||
self.reply = None | ||||
Pierre-Yves David
|
r24878 | self.captureoutput = False | ||
Pierre-Yves David
|
r23066 | |||
@property | ||||
def repo(self): | ||||
Augie Fackler
|
r43347 | raise error.ProgrammingError(b'no repo access from stream interruption') | ||
Pierre-Yves David
|
r23066 | |||
def gettransaction(self): | ||||
Augie Fackler
|
r43347 | raise TransactionUnavailable(b'no repo access from stream interruption') | ||
Pierre-Yves David
|
r23066 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r35110 | def decodepayloadchunks(ui, fh): | ||
"""Reads bundle2 part payload data into chunks. | ||||
Part payload data consists of framed chunks. This function takes | ||||
a file handle and emits those chunks. | ||||
""" | ||||
Augie Fackler
|
r43347 | dolog = ui.configbool(b'devel', b'bundle2.debug') | ||
Gregory Szorc
|
r35114 | debug = ui.debug | ||
Gregory Szorc
|
r35116 | headerstruct = struct.Struct(_fpayloadsize) | ||
headersize = headerstruct.size | ||||
unpack = headerstruct.unpack | ||||
Gregory Szorc
|
r35110 | readexactly = changegroup.readexactly | ||
Gregory Szorc
|
r35115 | read = fh.read | ||
Gregory Szorc
|
r35110 | |||
Gregory Szorc
|
r35116 | chunksize = unpack(readexactly(fh, headersize))[0] | ||
Augie Fackler
|
r43347 | indebug(ui, b'payload chunk size: %i' % chunksize) | ||
Gregory Szorc
|
r35110 | |||
Gregory Szorc
|
r35115 | # changegroup.readexactly() is inlined below for performance. | ||
Gregory Szorc
|
r35110 | while chunksize: | ||
if chunksize >= 0: | ||||
Gregory Szorc
|
r35115 | s = read(chunksize) | ||
if len(s) < chunksize: | ||||
Augie Fackler
|
r43346 | raise error.Abort( | ||
_( | ||||
Augie Fackler
|
r43347 | b'stream ended unexpectedly ' | ||
b' (got %d bytes, expected %d)' | ||||
Augie Fackler
|
r43346 | ) | ||
% (len(s), chunksize) | ||||
) | ||||
Gregory Szorc
|
r35115 | |||
yield s | ||||
Gregory Szorc
|
r35110 | elif chunksize == flaginterrupt: | ||
# Interrupt "signal" detected. The regular stream is interrupted | ||||
# and a bundle2 part follows. Consume it. | ||||
interrupthandler(ui, fh)() | ||||
else: | ||||
raise error.BundleValueError( | ||||
Augie Fackler
|
r43347 | b'negative payload chunk size: %s' % chunksize | ||
Augie Fackler
|
r43346 | ) | ||
Gregory Szorc
|
r35110 | |||
Gregory Szorc
|
r35115 | s = read(headersize) | ||
if len(s) < headersize: | ||||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Martin von Zweigbergk
|
r43387 | _(b'stream ended unexpectedly (got %d bytes, expected %d)') | ||
Augie Fackler
|
r43346 | % (len(s), chunksize) | ||
) | ||||
Gregory Szorc
|
r35115 | |||
Gregory Szorc
|
r35116 | chunksize = unpack(s)[0] | ||
Gregory Szorc
|
r35114 | |||
# indebug() inlined for performance. | ||||
if dolog: | ||||
Augie Fackler
|
r43347 | debug(b'bundle2-input: payload chunk size: %i\n' % chunksize) | ||
Gregory Szorc
|
r35110 | |||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r21014 | class unbundlepart(unpackermixin): | ||
"""a bundle part read from a bundle""" | ||||
def __init__(self, ui, header, fp): | ||||
super(unbundlepart, self).__init__(fp) | ||||
r51821 | self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell') | |||
Pierre-Yves David
|
r21014 | self.ui = ui | ||
# unbundle state attr | ||||
self._headerdata = header | ||||
Pierre-Yves David
|
r21015 | self._headeroffset = 0 | ||
Pierre-Yves David
|
r21019 | self._initialized = False | ||
self.consumed = False | ||||
Pierre-Yves David
|
r21014 | # part data | ||
self.id = None | ||||
self.type = None | ||||
self.mandatoryparams = None | ||||
self.advisoryparams = None | ||||
Pierre-Yves David
|
r21610 | self.params = None | ||
Pierre-Yves David
|
r21612 | self.mandatorykeys = () | ||
Pierre-Yves David
|
r21019 | self._readheader() | ||
Eric Sumner
|
r23585 | self._mandatory = None | ||
Eric Sumner
|
r24036 | self._pos = 0 | ||
Pierre-Yves David
|
r21014 | |||
Pierre-Yves David
|
r21015 | def _fromheader(self, size): | ||
"""return the next <size> byte from the header""" | ||||
offset = self._headeroffset | ||||
Augie Fackler
|
r43346 | data = self._headerdata[offset : (offset + size)] | ||
Pierre-Yves David
|
r21019 | self._headeroffset = offset + size | ||
Pierre-Yves David
|
r21015 | return data | ||
Pierre-Yves David
|
r21016 | def _unpackheader(self, format): | ||
"""read given format from header | ||||
This automatically compute the size of the format to read.""" | ||||
data = self._fromheader(struct.calcsize(format)) | ||||
return _unpack(format, data) | ||||
Pierre-Yves David
|
r21608 | def _initparams(self, mandatoryparams, advisoryparams): | ||
"""internal function to setup all logic related parameters""" | ||||
Pierre-Yves David
|
r21609 | # make it read only to prevent people touching it by mistake. | ||
self.mandatoryparams = tuple(mandatoryparams) | ||||
Augie Fackler
|
r43346 | self.advisoryparams = tuple(advisoryparams) | ||
Pierre-Yves David
|
r21610 | # user friendly UI | ||
Gregory Szorc
|
r29591 | self.params = util.sortdict(self.mandatoryparams) | ||
self.params.update(self.advisoryparams) | ||||
Pierre-Yves David
|
r21612 | self.mandatorykeys = frozenset(p[0] for p in mandatoryparams) | ||
Pierre-Yves David
|
r21608 | |||
Pierre-Yves David
|
r21019 | def _readheader(self): | ||
Pierre-Yves David
|
r21014 | """read the header and setup the object""" | ||
Pierre-Yves David
|
r21016 | typesize = self._unpackheader(_fparttypesize)[0] | ||
Pierre-Yves David
|
r21015 | self.type = self._fromheader(typesize) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'part type: "%s"' % self.type) | ||
Pierre-Yves David
|
r21016 | self.id = self._unpackheader(_fpartid)[0] | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id)) | ||
Eric Sumner
|
r23585 | # extract mandatory bit from type | ||
Augie Fackler
|
r43346 | self.mandatory = self.type != self.type.lower() | ||
Eric Sumner
|
r23585 | self.type = self.type.lower() | ||
Pierre-Yves David
|
r21014 | ## reading parameters | ||
# param count | ||||
Pierre-Yves David
|
r21016 | mancount, advcount = self._unpackheader(_fpartparamcount) | ||
Augie Fackler
|
r43347 | indebug(self.ui, b'part parameters: %i' % (mancount + advcount)) | ||
Pierre-Yves David
|
r21014 | # param size | ||
Pierre-Yves David
|
r21016 | fparamsizes = _makefpartparamsizes(mancount + advcount) | ||
paramsizes = self._unpackheader(fparamsizes) | ||||
Pierre-Yves David
|
r21014 | # make it a list of couple again | ||
Augie Fackler
|
r33637 | paramsizes = list(zip(paramsizes[::2], paramsizes[1::2])) | ||
Pierre-Yves David
|
r21014 | # split mandatory from advisory | ||
mansizes = paramsizes[:mancount] | ||||
advsizes = paramsizes[mancount:] | ||||
Mads Kiilerich
|
r23139 | # retrieve param value | ||
Pierre-Yves David
|
r21014 | manparams = [] | ||
for key, value in mansizes: | ||||
Pierre-Yves David
|
r21015 | manparams.append((self._fromheader(key), self._fromheader(value))) | ||
Pierre-Yves David
|
r21014 | advparams = [] | ||
for key, value in advsizes: | ||||
Pierre-Yves David
|
r21015 | advparams.append((self._fromheader(key), self._fromheader(value))) | ||
Pierre-Yves David
|
r21608 | self._initparams(manparams, advparams) | ||
Pierre-Yves David
|
r21014 | ## part payload | ||
Eric Sumner
|
r24034 | self._payloadstream = util.chunkbuffer(self._payloadchunks()) | ||
Pierre-Yves David
|
r21019 | # we read the data, tell it | ||
self._initialized = True | ||||
Gregory Szorc
|
r35110 | def _payloadchunks(self): | ||
"""Generator of decoded chunks in the payload.""" | ||||
return decodepayloadchunks(self.ui, self._fp) | ||||
Gregory Szorc
|
r35111 | def consume(self): | ||
"""Read the part payload until completion. | ||||
By consuming the part data, the underlying stream read offset will | ||||
be advanced to the next part (or end of stream). | ||||
""" | ||||
if self.consumed: | ||||
return | ||||
chunk = self.read(32768) | ||||
while chunk: | ||||
self._pos += len(chunk) | ||||
chunk = self.read(32768) | ||||
Pierre-Yves David
|
r21019 | def read(self, size=None): | ||
"""read payload data""" | ||||
if not self._initialized: | ||||
self._readheader() | ||||
if size is None: | ||||
data = self._payloadstream.read() | ||||
else: | ||||
data = self._payloadstream.read(size) | ||||
Pierre-Yves David
|
r25334 | self._pos += len(data) | ||
Pierre-Yves David
|
r21019 | if size is None or len(data) < size: | ||
Pierre-Yves David
|
r25334 | if not self.consumed and self._pos: | ||
Augie Fackler
|
r43346 | self.ui.debug( | ||
Augie Fackler
|
r43347 | b'bundle2-input-part: total payload size %i\n' % self._pos | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r21019 | self.consumed = True | ||
return data | ||||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r35109 | class seekableunbundlepart(unbundlepart): | ||
"""A bundle2 part in a bundle that is seekable. | ||||
Regular ``unbundlepart`` instances can only be read once. This class | ||||
extends ``unbundlepart`` to enable bi-directional seeking within the | ||||
part. | ||||
Bundle2 part data consists of framed chunks. Offsets when seeking | ||||
refer to the decoded data, not the offsets in the underlying bundle2 | ||||
stream. | ||||
To facilitate quickly seeking within the decoded data, instances of this | ||||
class maintain a mapping between offsets in the underlying stream and | ||||
the decoded payload. This mapping will consume memory in proportion | ||||
to the number of chunks within the payload (which almost certainly | ||||
increases in proportion with the size of the part). | ||||
""" | ||||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r35109 | def __init__(self, ui, header, fp): | ||
# (payload, file) offsets for chunk starts. | ||||
self._chunkindex = [] | ||||
super(seekableunbundlepart, self).__init__(ui, header, fp) | ||||
def _payloadchunks(self, chunknum=0): | ||||
'''seek to specified chunk and start yielding data''' | ||||
if len(self._chunkindex) == 0: | ||||
Augie Fackler
|
r43347 | assert chunknum == 0, b'Must start with chunk 0' | ||
Gregory Szorc
|
r35109 | self._chunkindex.append((0, self._tellfp())) | ||
else: | ||||
Augie Fackler
|
r41925 | assert chunknum < len(self._chunkindex), ( | ||
Augie Fackler
|
r43347 | b'Unknown chunk %d' % chunknum | ||
Augie Fackler
|
r43346 | ) | ||
Gregory Szorc
|
r35109 | self._seekfp(self._chunkindex[chunknum][1]) | ||
pos = self._chunkindex[chunknum][0] | ||||
Gregory Szorc
|
r35110 | |||
for chunk in decodepayloadchunks(self.ui, self._fp): | ||||
chunknum += 1 | ||||
pos += len(chunk) | ||||
if chunknum == len(self._chunkindex): | ||||
self._chunkindex.append((pos, self._tellfp())) | ||||
yield chunk | ||||
Gregory Szorc
|
r35109 | |||
def _findchunk(self, pos): | ||||
'''for a given payload position, return a chunk number and offset''' | ||||
for chunk, (ppos, fpos) in enumerate(self._chunkindex): | ||||
if ppos == pos: | ||||
return chunk, 0 | ||||
elif ppos > pos: | ||||
return chunk - 1, pos - self._chunkindex[chunk - 1][0] | ||||
Augie Fackler
|
r43347 | raise ValueError(b'Unknown chunk') | ||
Gregory Szorc
|
r35109 | |||
Eric Sumner
|
r24036 | def tell(self): | ||
return self._pos | ||||
Gregory Szorc
|
r35037 | def seek(self, offset, whence=os.SEEK_SET): | ||
if whence == os.SEEK_SET: | ||||
Eric Sumner
|
r24037 | newpos = offset | ||
Gregory Szorc
|
r35037 | elif whence == os.SEEK_CUR: | ||
Eric Sumner
|
r24037 | newpos = self._pos + offset | ||
Gregory Szorc
|
r35037 | elif whence == os.SEEK_END: | ||
Eric Sumner
|
r24037 | if not self.consumed: | ||
Gregory Szorc
|
r35117 | # Can't use self.consume() here because it advances self._pos. | ||
chunk = self.read(32768) | ||||
while chunk: | ||||
chunk = self.read(32768) | ||||
Eric Sumner
|
r24037 | newpos = self._chunkindex[-1][0] - offset | ||
else: | ||||
Augie Fackler
|
r43347 | raise ValueError(b'Unknown whence value: %r' % (whence,)) | ||
Eric Sumner
|
r24037 | |||
if newpos > self._chunkindex[-1][0] and not self.consumed: | ||||
Gregory Szorc
|
r35117 | # Can't use self.consume() here because it advances self._pos. | ||
chunk = self.read(32768) | ||||
while chunk: | ||||
chunk = self.read(32668) | ||||
Eric Sumner
|
r24037 | if not 0 <= newpos <= self._chunkindex[-1][0]: | ||
Augie Fackler
|
r43347 | raise ValueError(b'Offset out of range') | ||
Eric Sumner
|
r24037 | |||
if self._pos != newpos: | ||||
chunk, internaloffset = self._findchunk(newpos) | ||||
self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk)) | ||||
adjust = self.read(internaloffset) | ||||
if len(adjust) != internaloffset: | ||||
Augie Fackler
|
r43347 | raise error.Abort(_(b'Seek failed\n')) | ||
Eric Sumner
|
r24037 | self._pos = newpos | ||
Pierre-Yves David
|
r31889 | def _seekfp(self, offset, whence=0): | ||
"""move the underlying file pointer | ||||
This method is meant for internal usage by the bundle2 protocol only. | ||||
They directly manipulate the low level stream including bundle2 level | ||||
instruction. | ||||
Do not use it to implement higher-level logic or methods.""" | ||||
if self._seekable: | ||||
return self._fp.seek(offset, whence) | ||||
else: | ||||
Augie Fackler
|
r43347 | raise NotImplementedError(_(b'File pointer is not seekable')) | ||
Pierre-Yves David
|
r31889 | |||
def _tellfp(self): | ||||
"""return the file offset, or None if file is not seekable | ||||
This method is meant for internal usage by the bundle2 protocol only. | ||||
They directly manipulate the low level stream including bundle2 level | ||||
instruction. | ||||
Do not use it to implement higher-level logic or methods.""" | ||||
if self._seekable: | ||||
try: | ||||
return self._fp.tell() | ||||
except IOError as e: | ||||
if e.errno == errno.ESPIPE: | ||||
self._seekable = False | ||||
else: | ||||
raise | ||||
return None | ||||
Augie Fackler
|
r43346 | |||
Pierre-Yves David
|
r25317 | # These are only the static capabilities. | ||
# Check the 'getrepocaps' function for the rest. | ||||
Matt Harbison
|
r52564 | capabilities: "Capabilities" = { | ||
Augie Fackler
|
r43347 | b'HG20': (), | ||
b'bookmarks': (), | ||||
b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'), | ||||
b'listkeys': (), | ||||
b'pushkey': (), | ||||
b'digests': tuple(sorted(util.DIGESTS.keys())), | ||||
b'remote-changegroup': (b'http', b'https'), | ||||
b'hgtagsfnodes': (), | ||||
b'phases': (b'heads',), | ||||
b'stream': (b'v2',), | ||||
Augie Fackler
|
r43346 | } | ||
Pierre-Yves David
|
r22341 | |||
Matt Harbison
|
r52564 | # TODO: drop the default value for 'role' | ||
def getrepocaps(repo, allowpushback: bool = False, role=None) -> "Capabilities": | ||||
Pierre-Yves David
|
r22342 | """return the bundle2 capabilities for a given repo | ||
Pierre-Yves David
|
r22343 | Exists to allow extensions (like evolution) to mutate the capabilities. | ||
Gregory Szorc
|
r35801 | |||
The returned value is used for servers advertising their capabilities as | ||||
well as clients advertising their capabilities to servers as part of | ||||
bundle2 requests. The ``role`` argument specifies which is which. | ||||
Pierre-Yves David
|
r22342 | """ | ||
Augie Fackler
|
r43347 | if role not in (b'client', b'server'): | ||
raise error.ProgrammingError(b'role argument must be client or server') | ||||
Gregory Szorc
|
r35801 | |||
Pierre-Yves David
|
r22343 | caps = capabilities.copy() | ||
Augie Fackler
|
r43347 | caps[b'changegroup'] = tuple( | ||
Augie Fackler
|
r43346 | sorted(changegroup.supportedincomingversions(repo)) | ||
) | ||||
Durham Goode
|
r22953 | if obsolete.isenabled(repo, obsolete.exchangeopt): | ||
Augie Fackler
|
r43347 | supportedformat = tuple(b'V%i' % v for v in obsolete.formats) | ||
caps[b'obsmarkers'] = supportedformat | ||||
Eric Sumner
|
r23439 | if allowpushback: | ||
Augie Fackler
|
r43347 | caps[b'pushback'] = () | ||
cpmode = repo.ui.config(b'server', b'concurrent-push-mode') | ||||
if cpmode == b'check-related': | ||||
caps[b'checkheads'] = (b'related',) | ||||
if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'): | ||||
caps.pop(b'phases') | ||||
Gregory Szorc
|
r35808 | |||
# Don't advertise stream clone support in server mode if not configured. | ||||
Augie Fackler
|
r43347 | if role == b'server': | ||
Augie Fackler
|
r43346 | streamsupported = repo.ui.configbool( | ||
Augie Fackler
|
r43347 | b'server', b'uncompressed', untrusted=True | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | featuresupported = repo.ui.configbool(b'server', b'bundle2.stream') | ||
Gregory Szorc
|
r35808 | |||
if not streamsupported or not featuresupported: | ||||
Augie Fackler
|
r43347 | caps.pop(b'stream') | ||
Gregory Szorc
|
r35810 | # Else always advertise support on client, because payload support | ||
# should always be advertised. | ||||
Gregory Szorc
|
r35808 | |||
r51417 | if repo.ui.configbool(b'experimental', b'stream-v3'): | |||
if b'stream' in caps: | ||||
caps[b'stream'] += (b'v3-exp',) | ||||
Joerg Sonnenberger
|
r47378 | # b'rev-branch-cache is no longer advertised, but still supported | ||
# for legacy clients. | ||||
Pierre-Yves David
|
r22343 | return caps | ||
Pierre-Yves David
|
r22342 | |||
Augie Fackler
|
r43346 | |||
Matt Harbison
|
r52564 | def bundle2caps(remote) -> "Capabilities": | ||
Mads Kiilerich
|
r23139 | """return the bundle capabilities of a peer as dict""" | ||
Augie Fackler
|
r43347 | raw = remote.capable(b'bundle2') | ||
if not raw and raw != b'': | ||||
Pierre-Yves David
|
r21644 | return {} | ||
Augie Fackler
|
r43347 | capsblob = urlreq.unquote(remote.capable(b'bundle2')) | ||
Pierre-Yves David
|
r21644 | return decodecaps(capsblob) | ||
Pierre-Yves David
|
r21014 | |||
Augie Fackler
|
r43346 | |||
Matt Harbison
|
r52564 | def obsmarkersversion(caps: "Capabilities"): | ||
Augie Fackler
|
r46554 | """extract the list of supported obsmarkers versions from a bundle2caps dict""" | ||
Augie Fackler
|
r43347 | obscaps = caps.get(b'obsmarkers', ()) | ||
return [int(c[1:]) for c in obscaps if c.startswith(b'V')] | ||||
Pierre-Yves David
|
r22344 | |||
Augie Fackler
|
r43346 | |||
def writenewbundle( | ||||
ui, | ||||
repo, | ||||
source, | ||||
filename, | ||||
bundletype, | ||||
outgoing, | ||||
opts, | ||||
vfs=None, | ||||
compression=None, | ||||
compopts=None, | ||||
r51213 | allow_internal=False, | |||
Augie Fackler
|
r43346 | ): | ||
Augie Fackler
|
r43347 | if bundletype.startswith(b'HG10'): | ||
cg = changegroup.makechangegroup(repo, outgoing, b'01', source) | ||||
Augie Fackler
|
r43346 | return writebundle( | ||
ui, | ||||
cg, | ||||
filename, | ||||
bundletype, | ||||
vfs=vfs, | ||||
compression=compression, | ||||
compopts=compopts, | ||||
) | ||||
Augie Fackler
|
r43347 | elif not bundletype.startswith(b'HG20'): | ||
raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype) | ||||
r32216 | ||||
r51213 | # enforce that no internal phase are to be bundled | |||
bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof) | ||||
if bundled_internal and not allow_internal: | ||||
count = len(repo.revs(b'%ln and _internal()', outgoing.missing)) | ||||
msg = "backup bundle would contains %d internal changesets" | ||||
msg %= count | ||||
raise error.ProgrammingError(msg) | ||||
Matt Harbison
|
r52564 | caps: "Capabilities" = {} | ||
r50231 | if opts.get(b'obsolescence', False): | |||
Augie Fackler
|
r43347 | caps[b'obsmarkers'] = (b'V1',) | ||
r52451 | stream_version = opts.get(b'stream', b"") | |||
if stream_version == b"v2": | ||||
r51411 | caps[b'stream'] = [b'v2'] | |||
r52451 | elif stream_version == b"v3-exp": | |||
Arseniy Alekseyev
|
r51426 | caps[b'stream'] = [b'v3-exp'] | ||
r32516 | bundle = bundle20(ui, caps) | |||
r32216 | bundle.setcompression(compression, compopts) | |||
_addpartsfromopts(ui, repo, bundle, source, outgoing, opts) | ||||
chunkiter = bundle.getchunks() | ||||
return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) | ||||
Augie Fackler
|
r43346 | |||
r32216 | def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts): | |||
# We should eventually reconcile this logic with the one behind | ||||
# 'exchange.getbundle2partsgenerator'. | ||||
# | ||||
# The type of input from 'getbundle' and 'writenewbundle' are a bit | ||||
# different right now. So we keep them separated for now for the sake of | ||||
# simplicity. | ||||
Boris Feld
|
r37023 | # we might not always want a changegroup in such bundle, for example in | ||
# stream bundles | ||||
Augie Fackler
|
r43347 | if opts.get(b'changegroup', True): | ||
cgversion = opts.get(b'cg.version') | ||||
Boris Feld
|
r37023 | if cgversion is None: | ||
cgversion = changegroup.safeversion(repo) | ||||
cg = changegroup.makechangegroup(repo, outgoing, cgversion, source) | ||||
Augie Fackler
|
r43347 | part = bundler.newpart(b'changegroup', data=cg.getchunks()) | ||
part.addparam(b'version', cg.version) | ||||
if b'clcount' in cg.extras: | ||||
Augie Fackler
|
r43346 | part.addparam( | ||
Augie Fackler
|
r43347 | b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Jason R. Coombs , Pierre-Yves David pierre-yves.david@octobus.net
|
r51206 | if opts.get(b'phases'): | ||
target_phase = phases.draft | ||||
for head in outgoing.ancestorsof: | ||||
target_phase = max(target_phase, repo[head].phase()) | ||||
if target_phase > phases.draft: | ||||
part.addparam( | ||||
b'targetphase', | ||||
b'%d' % target_phase, | ||||
mandatory=False, | ||||
) | ||||
r48000 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: | |||
part.addparam(b'exp-sidedata', b'1') | ||||
Augie Fackler
|
r43347 | |||
r52451 | if opts.get(b'stream', b"") == b"v2": | |||
Boris Feld
|
r37184 | addpartbundlestream2(bundler, repo, stream=True) | ||
r52451 | if opts.get(b'stream', b"") == b"v3-exp": | |||
Arseniy Alekseyev
|
r51426 | addpartbundlestream2(bundler, repo, stream=True) | ||
Augie Fackler
|
r43347 | if opts.get(b'tagsfnodescache', True): | ||
Boris Feld
|
r37182 | addparttagsfnodescache(repo, bundler, outgoing) | ||
Augie Fackler
|
r43347 | if opts.get(b'revbranchcache', True): | ||
Boris Feld
|
r37182 | addpartrevbranchcache(repo, bundler, outgoing) | ||
r32218 | ||||
Augie Fackler
|
r43347 | if opts.get(b'obsolescence', False): | ||
Joerg Sonnenberger
|
r52789 | obsmarkers = repo.obsstore.relevantmarkers(nodes=outgoing.missing) | ||
Joerg Sonnenberger
|
r46780 | buildobsmarkerspart( | ||
bundler, | ||||
obsmarkers, | ||||
mandatory=opts.get(b'obsolescence-mandatory', True), | ||||
) | ||||
r32516 | ||||
Augie Fackler
|
r43347 | if opts.get(b'phases', False): | ||
Martin von Zweigbergk
|
r33031 | headsbyphase = phases.subsetphaseheads(repo, outgoing.missing) | ||
Boris Feld
|
r34320 | phasedata = phases.binaryencode(headsbyphase) | ||
Augie Fackler
|
r43347 | bundler.newpart(b'phase-heads', data=phasedata) | ||
Martin von Zweigbergk
|
r33031 | |||
Augie Fackler
|
r43346 | |||
r32217 | def addparttagsfnodescache(repo, bundler, outgoing): | |||
# we include the tags fnode cache for the bundle changeset | ||||
# (as an optional parts) | ||||
cache = tags.hgtagsfnodescache(repo.unfiltered()) | ||||
chunks = [] | ||||
# .hgtags fnodes are only relevant for head changesets. While we could | ||||
# transfer values for all known nodes, there will likely be little to | ||||
# no benefit. | ||||
# | ||||
# We don't bother using a generator to produce output data because | ||||
# a) we only have 40 bytes per head and even esoteric numbers of heads | ||||
# consume little memory (1M heads is 40MB) b) we don't want to send the | ||||
# part if we don't have entries and knowing if we have entries requires | ||||
# cache lookups. | ||||
Manuel Jacob
|
r45704 | for node in outgoing.ancestorsof: | ||
r32217 | # Don't compute missing, as this may slow down serving. | |||
fnode = cache.getfnode(node, computemissing=False) | ||||
Matt Harbison
|
r47245 | if fnode: | ||
r32217 | chunks.extend([node, fnode]) | |||
if chunks: | ||||
r52432 | bundler.newpart( | |||
b'hgtagsfnodes', | ||||
mandatory=False, | ||||
data=b''.join(chunks), | ||||
) | ||||
r32217 | ||||
Augie Fackler
|
r43346 | |||
Boris Feld
|
r36982 | def addpartrevbranchcache(repo, bundler, outgoing): | ||
# we include the rev branch cache for the bundle changeset | ||||
# (as an optional parts) | ||||
cache = repo.revbranchcache() | ||||
cl = repo.unfiltered().changelog | ||||
branchesdata = collections.defaultdict(lambda: (set(), set())) | ||||
for node in outgoing.missing: | ||||
branch, close = cache.branchinfo(cl.rev(node)) | ||||
branchesdata[branch][close].add(node) | ||||
def generate(): | ||||
for branch, (nodes, closed) in sorted(branchesdata.items()): | ||||
utf8branch = encoding.fromlocal(branch) | ||||
yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed)) | ||||
yield utf8branch | ||||
for n in sorted(nodes): | ||||
yield n | ||||
for n in sorted(closed): | ||||
yield n | ||||
Augie Fackler
|
r43347 | bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False) | ||
Augie Fackler
|
r43346 | |||
Boris Feld
|
r36982 | |||
Boris Feld
|
r37184 | def _formatrequirementsspec(requirements): | ||
Augie Fackler
|
r43347 | requirements = [req for req in requirements if req != b"shared"] | ||
return urlreq.quote(b','.join(sorted(requirements))) | ||||
Boris Feld
|
r37184 | |||
Augie Fackler
|
r43346 | |||
Boris Feld
|
r37184 | def _formatrequirementsparams(requirements): | ||
requirements = _formatrequirementsspec(requirements) | ||||
Augie Fackler
|
r43347 | params = b"%s%s" % (urlreq.quote(b"requirements="), requirements) | ||
Boris Feld
|
r37184 | return params | ||
Augie Fackler
|
r43346 | |||
Raphaël Gomès
|
r47447 | def format_remote_wanted_sidedata(repo): | ||
"""Formats a repo's wanted sidedata categories into a bytestring for | ||||
capabilities exchange.""" | ||||
wanted = b"" | ||||
if repo._wanted_sidedata: | ||||
wanted = b','.join( | ||||
pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata) | ||||
) | ||||
return wanted | ||||
def read_remote_wanted_sidedata(remote): | ||||
sidedata_categories = remote.capable(b'exp-wanted-sidedata') | ||||
return read_wanted_sidedata(sidedata_categories) | ||||
def read_wanted_sidedata(formatted): | ||||
if formatted: | ||||
return set(formatted.split(b',')) | ||||
return set() | ||||
Boris Feld
|
r37184 | def addpartbundlestream2(bundler, repo, **kwargs): | ||
Augie Fackler
|
r43906 | if not kwargs.get('stream', False): | ||
Boris Feld
|
r37184 | return | ||
if not streamclone.allowservergeneration(repo): | ||||
Arseniy Alekseyev
|
r51410 | msg = _(b'stream data requested but server does not allow this feature') | ||
hint = _(b'the client seems buggy') | ||||
raise error.Abort(msg, hint=hint) | ||||
Arseniy Alekseyev
|
r51412 | if not (b'stream' in bundler.capabilities): | ||
msg = _( | ||||
b'stream data requested but supported streaming clone versions were not specified' | ||||
) | ||||
hint = _(b'the client seems buggy') | ||||
raise error.Abort(msg, hint=hint) | ||||
r51417 | client_supported = set(bundler.capabilities[b'stream']) | |||
server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', [])) | ||||
common_supported = client_supported & server_supported | ||||
if not common_supported: | ||||
msg = _(b'no common supported version with the client: %s; %s') | ||||
str_server = b','.join(sorted(server_supported)) | ||||
str_client = b','.join(sorted(client_supported)) | ||||
msg %= (str_server, str_client) | ||||
raise error.Abort(msg) | ||||
version = max(common_supported) | ||||
Boris Feld
|
r37184 | |||
# Stream clones don't compress well. And compression undermines a | ||||
# goal of stream clones, which is to be fast. Communicate the desire | ||||
# to avoid compression to consumers of the bundle. | ||||
bundler.prefercompressed = False | ||||
Pulkit Goyal
|
r40374 | # get the includes and excludes | ||
Augie Fackler
|
r43906 | includepats = kwargs.get('includepats') | ||
excludepats = kwargs.get('excludepats') | ||||
Pulkit Goyal
|
r40374 | |||
Augie Fackler
|
r43346 | narrowstream = repo.ui.configbool( | ||
Augie Fackler
|
r43347 | b'experimental', b'server.stream-narrow-clones' | ||
Augie Fackler
|
r43346 | ) | ||
Pulkit Goyal
|
r40374 | |||
if (includepats or excludepats) and not narrowstream: | ||||
Augie Fackler
|
r43347 | raise error.Abort(_(b'server does not support narrow stream clones')) | ||
Pulkit Goyal
|
r40374 | |||
r40434 | includeobsmarkers = False | |||
if repo.obsstore: | ||||
remoteversions = obsmarkersversion(bundler.capabilities) | ||||
r40435 | if not remoteversions: | |||
Augie Fackler
|
r43346 | raise error.Abort( | ||
_( | ||||
Augie Fackler
|
r43347 | b'server has obsolescence markers, but client ' | ||
b'cannot receive them via stream clone' | ||||
Augie Fackler
|
r43346 | ) | ||
) | ||||
r40435 | elif repo.obsstore._version in remoteversions: | |||
r40434 | includeobsmarkers = True | |||
r51417 | if version == b"v2": | |||
filecount, bytecount, it = streamclone.generatev2( | ||||
repo, includepats, excludepats, includeobsmarkers | ||||
) | ||||
requirements = streamclone.streamed_requirements(repo) | ||||
requirements = _formatrequirementsspec(requirements) | ||||
part = bundler.newpart(b'stream2', data=it) | ||||
part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True) | ||||
part.addparam(b'filecount', b'%d' % filecount, mandatory=True) | ||||
part.addparam(b'requirements', requirements, mandatory=True) | ||||
elif version == b"v3-exp": | ||||
Arseniy Alekseyev
|
r51599 | it = streamclone.generatev3( | ||
r51417 | repo, includepats, excludepats, includeobsmarkers | |||
) | ||||
requirements = streamclone.streamed_requirements(repo) | ||||
requirements = _formatrequirementsspec(requirements) | ||||
r51425 | part = bundler.newpart(b'stream3-exp', data=it) | |||
r51417 | part.addparam(b'requirements', requirements, mandatory=True) | |||
Boris Feld
|
r37184 | |||
Augie Fackler
|
r43346 | |||
Joerg Sonnenberger
|
r46780 | def buildobsmarkerspart(bundler, markers, mandatory=True): | ||
r32515 | """add an obsmarker part to the bundler with <markers> | |||
No part is created if markers is empty. | ||||
Raises ValueError if the bundler doesn't support any known obsmarker format. | ||||
""" | ||||
if not markers: | ||||
return None | ||||
remoteversions = obsmarkersversion(bundler.capabilities) | ||||
version = obsolete.commonversion(remoteversions) | ||||
if version is None: | ||||
Augie Fackler
|
r43347 | raise ValueError(b'bundler does not support common obsmarker format') | ||
r32515 | stream = obsolete.encodemarkers(markers, True, version=version) | |||
Joerg Sonnenberger
|
r46780 | return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory) | ||
r32515 | ||||
Augie Fackler
|
r43346 | |||
def writebundle( | ||||
ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None | ||||
): | ||||
Martin von Zweigbergk
|
r28666 | """Write a bundle file and return its filename. | ||
Existing files will not be overwritten. | ||||
If no filename is specified, a temporary file is created. | ||||
bz2 compression can be turned off. | ||||
The bundle file will be deleted in case of errors. | ||||
""" | ||||
Augie Fackler
|
r43347 | if bundletype == b"HG20": | ||
Martin von Zweigbergk
|
r28666 | bundle = bundle20(ui) | ||
Gregory Szorc
|
r30757 | bundle.setcompression(compression, compopts) | ||
Augie Fackler
|
r43347 | part = bundle.newpart(b'changegroup', data=cg.getchunks()) | ||
part.addparam(b'version', cg.version) | ||||
if b'clcount' in cg.extras: | ||||
Augie Fackler
|
r43346 | part.addparam( | ||
Augie Fackler
|
r43347 | b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Martin von Zweigbergk
|
r28666 | chunkiter = bundle.getchunks() | ||
else: | ||||
# compression argument is only for the bundle2 case | ||||
assert compression is None | ||||
Augie Fackler
|
r43347 | if cg.version != b'01': | ||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Martin von Zweigbergk
|
r43387 | _(b'old bundle types only supports v1 changegroups') | ||
Augie Fackler
|
r43346 | ) | ||
Matt Harbison
|
r50543 | |||
# HG20 is the case without 2 values to unpack, but is handled above. | ||||
# pytype: disable=bad-unpacking | ||||
Martin von Zweigbergk
|
r28666 | header, comp = bundletypes[bundletype] | ||
Matt Harbison
|
r50543 | # pytype: enable=bad-unpacking | ||
Gregory Szorc
|
r30351 | if comp not in util.compengines.supportedbundletypes: | ||
Augie Fackler
|
r43347 | raise error.Abort(_(b'unknown stream compression type: %s') % comp) | ||
Gregory Szorc
|
r30351 | compengine = util.compengines.forbundletype(comp) | ||
Augie Fackler
|
r43346 | |||
Martin von Zweigbergk
|
r28666 | def chunkiter(): | ||
yield header | ||||
Gregory Szorc
|
r30757 | for chunk in compengine.compressstream(cg.getchunks(), compopts): | ||
Gregory Szorc
|
r30357 | yield chunk | ||
Augie Fackler
|
r43346 | |||
Martin von Zweigbergk
|
r28666 | chunkiter = chunkiter() | ||
# parse the changegroup data, otherwise we will block | ||||
# in case of sshrepo because we don't know the end of the stream | ||||
return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs) | ||||
Augie Fackler
|
r43346 | |||
Martin von Zweigbergk
|
r33037 | def combinechangegroupresults(op): | ||
Martin von Zweigbergk
|
r33036 | """logic to combine 0 or more addchangegroup results into one""" | ||
Augie Fackler
|
r43347 | results = [r.get(b'return', 0) for r in op.records[b'changegroup']] | ||
Martin von Zweigbergk
|
r33036 | changedheads = 0 | ||
result = 1 | ||||
for ret in results: | ||||
# If any changegroup result is 0, return 0 | ||||
if ret == 0: | ||||
result = 0 | ||||
break | ||||
if ret < -1: | ||||
changedheads += ret + 1 | ||||
elif ret > 1: | ||||
changedheads += ret - 1 | ||||
if changedheads > 0: | ||||
result = 1 + changedheads | ||||
elif changedheads < 0: | ||||
result = -1 + changedheads | ||||
return result | ||||
Augie Fackler
|
r43346 | |||
@parthandler( | ||||
r43401 | b'changegroup', | |||
( | ||||
b'version', | ||||
b'nbchanges', | ||||
b'exp-sidedata', | ||||
Raphaël Gomès
|
r47447 | b'exp-wanted-sidedata', | ||
r43401 | b'treemanifest', | |||
b'targetphase', | ||||
), | ||||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r20998 | def handlechangegroup(op, inpart): | ||
r46781 | """apply a changegroup part on the repo""" | |||
Gregory Szorc
|
r39736 | from . import localrepo | ||
Martin von Zweigbergk
|
r32930 | tr = op.gettransaction() | ||
Augie Fackler
|
r43347 | unpackerversion = inpart.params.get(b'version', b'01') | ||
Pierre-Yves David
|
r23170 | # We should raise an appropriate exception here | ||
Martin von Zweigbergk
|
r27751 | cg = changegroup.getunbundler(unpackerversion, inpart, None) | ||
Pierre-Yves David
|
r23001 | # the source and url passed here are overwritten by the one contained in | ||
# the transaction.hookargs argument. So 'bundle2' is a placeholder | ||||
Pierre-Yves David
|
r25518 | nbchangesets = None | ||
Augie Fackler
|
r43347 | if b'nbchanges' in inpart.params: | ||
nbchangesets = int(inpart.params.get(b'nbchanges')) | ||||
Pulkit Goyal
|
r46129 | if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo): | ||
Martin von Zweigbergk
|
r27734 | if len(op.repo.changelog) != 0: | ||
Augie Fackler
|
r43346 | raise error.Abort( | ||
_( | ||||
Augie Fackler
|
r43347 | b"bundle contains tree manifests, but local repo is " | ||
b"non-empty and does not use tree manifests" | ||||
Augie Fackler
|
r43346 | ) | ||
) | ||||
Pulkit Goyal
|
r45932 | op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT) | ||
Gregory Szorc
|
r39736 | op.repo.svfs.options = localrepo.resolvestorevfsoptions( | ||
Augie Fackler
|
r43346 | op.repo.ui, op.repo.requirements, op.repo.features | ||
) | ||||
Pulkit Goyal
|
r45666 | scmutil.writereporequirements(op.repo) | ||
r43401 | ||||
Boris Feld
|
r33407 | extrakwargs = {} | ||
Augie Fackler
|
r43347 | targetphase = inpart.params.get(b'targetphase') | ||
Boris Feld
|
r33407 | if targetphase is not None: | ||
Augie Fackler
|
r43906 | extrakwargs['targetphase'] = int(targetphase) | ||
Raphaël Gomès
|
r47447 | |||
remote_sidedata = inpart.params.get(b'exp-wanted-sidedata') | ||||
extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata) | ||||
Augie Fackler
|
r43346 | ret = _processchangegroup( | ||
op, | ||||
cg, | ||||
tr, | ||||
Raphaël Gomès
|
r47269 | op.source, | ||
Augie Fackler
|
r43347 | b'bundle2', | ||
Augie Fackler
|
r43346 | expectedtotal=nbchangesets, | ||
Matt Harbison
|
r52755 | **extrakwargs, | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r20998 | if op.reply is not None: | ||
Mads Kiilerich
|
r23139 | # This is definitely not the final form of this | ||
Pierre-Yves David
|
r20998 | # return. But one need to start somewhere. | ||
Augie Fackler
|
r43347 | part = op.reply.newpart(b'reply:changegroup', mandatory=False) | ||
Augie Fackler
|
r33638 | part.addparam( | ||
Augie Fackler
|
r43347 | b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | part.addparam(b'return', b'%i' % ret, mandatory=False) | ||
Pierre-Yves David
|
r21019 | assert not inpart.read() | ||
Pierre-Yves David
|
r20950 | |||
Augie Fackler
|
r43346 | |||
_remotechangegroupparams = tuple( | ||||
Augie Fackler
|
r43347 | [b'url', b'size', b'digests'] | ||
+ [b'digest:%s' % k for k in util.DIGESTS.keys()] | ||||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | @parthandler(b'remote-changegroup', _remotechangegroupparams) | ||
Mike Hommey
|
r23029 | def handleremotechangegroup(op, inpart): | ||
"""apply a bundle10 on the repo, given an url and validation information | ||||
All the information about the remote bundle to import are given as | ||||
parameters. The parameters include: | ||||
- url: the url to the bundle10. | ||||
- size: the bundle10 file size. It is used to validate what was | ||||
retrieved by the client matches the server knowledge about the bundle. | ||||
- digests: a space separated list of the digest types provided as | ||||
parameters. | ||||
- digest:<digest-type>: the hexadecimal representation of the digest with | ||||
that name. Like the size, it is used to validate what was retrieved by | ||||
the client matches what the server knows about the bundle. | ||||
When multiple digest types are given, all of them are checked. | ||||
""" | ||||
try: | ||||
Augie Fackler
|
r43347 | raw_url = inpart.params[b'url'] | ||
Mike Hommey
|
r23029 | except KeyError: | ||
Augie Fackler
|
r43347 | raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url') | ||
r47669 | parsed_url = urlutil.url(raw_url) | |||
Augie Fackler
|
r43347 | if parsed_url.scheme not in capabilities[b'remote-changegroup']: | ||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Augie Fackler
|
r43347 | _(b'remote-changegroup does not support %s urls') | ||
% parsed_url.scheme | ||||
Augie Fackler
|
r43346 | ) | ||
Mike Hommey
|
r23029 | |||
try: | ||||
Augie Fackler
|
r43347 | size = int(inpart.params[b'size']) | ||
Mike Hommey
|
r23029 | except ValueError: | ||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Augie Fackler
|
r43347 | _(b'remote-changegroup: invalid value for param "%s"') % b'size' | ||
Augie Fackler
|
r43346 | ) | ||
Mike Hommey
|
r23029 | except KeyError: | ||
Augie Fackler
|
r43347 | raise error.Abort( | ||
_(b'remote-changegroup: missing "%s" param') % b'size' | ||||
) | ||||
Mike Hommey
|
r23029 | |||
digests = {} | ||||
Augie Fackler
|
r43347 | for typ in inpart.params.get(b'digests', b'').split(): | ||
param = b'digest:%s' % typ | ||||
Mike Hommey
|
r23029 | try: | ||
value = inpart.params[param] | ||||
except KeyError: | ||||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Augie Fackler
|
r43347 | _(b'remote-changegroup: missing "%s" param') % param | ||
Augie Fackler
|
r43346 | ) | ||
Mike Hommey
|
r23029 | digests[typ] = value | ||
real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests) | ||||
Martin von Zweigbergk
|
r32930 | tr = op.gettransaction() | ||
Gregory Szorc
|
r25919 | from . import exchange | ||
Augie Fackler
|
r43346 | |||
Mike Hommey
|
r23029 | cg = exchange.readbundle(op.repo.ui, real_part, raw_url) | ||
if not isinstance(cg, changegroup.cg1unpacker): | ||||
Augie Fackler
|
r43346 | raise error.Abort( | ||
r47669 | _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url) | |||
Augie Fackler
|
r43346 | ) | ||
Raphaël Gomès
|
r47269 | ret = _processchangegroup(op, cg, tr, op.source, b'bundle2') | ||
Mike Hommey
|
r23029 | if op.reply is not None: | ||
Mads Kiilerich
|
r23139 | # This is definitely not the final form of this | ||
Mike Hommey
|
r23029 | # return. But one need to start somewhere. | ||
Augie Fackler
|
r43347 | part = op.reply.newpart(b'reply:changegroup') | ||
Augie Fackler
|
r33638 | part.addparam( | ||
Augie Fackler
|
r43347 | b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | part.addparam(b'return', b'%i' % ret, mandatory=False) | ||
Mike Hommey
|
r23029 | try: | ||
real_part.validate() | ||||
Pierre-Yves David
|
r26587 | except error.Abort as e: | ||
Augie Fackler
|
r43346 | raise error.Abort( | ||
Augie Fackler
|
r43347 | _(b'bundle at %s is corrupted:\n%s') | ||
r47669 | % (urlutil.hidepassword(raw_url), e.message) | |||
Augie Fackler
|
r43346 | ) | ||
Mike Hommey
|
r23029 | assert not inpart.read() | ||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'reply:changegroup', (b'return', b'in-reply-to')) | ||
Mike Hommey
|
r22548 | def handlereplychangegroup(op, inpart): | ||
Augie Fackler
|
r43347 | ret = int(inpart.params[b'return']) | ||
replyto = int(inpart.params[b'in-reply-to']) | ||||
op.records.add(b'changegroup', {b'return': ret}, replyto) | ||||
@parthandler(b'check:bookmarks') | ||||
Boris Feld
|
r35259 | def handlecheckbookmarks(op, inpart): | ||
"""check location of bookmarks | ||||
This part is to be used to detect push race regarding bookmark, it | ||||
contains binary encoded (bookmark, node) tuple. If the local state does | ||||
not marks the one in the part, a PushRaced exception is raised | ||||
""" | ||||
Joerg Sonnenberger
|
r47538 | bookdata = bookmarks.binarydecode(op.repo, inpart) | ||
Boris Feld
|
r35259 | |||
Augie Fackler
|
r43346 | msgstandard = ( | ||
Augie Fackler
|
r43347 | b'remote repository changed while pushing - please try again ' | ||
b'(bookmark "%s" move from %s to %s)' | ||||
Augie Fackler
|
r43346 | ) | ||
msgmissing = ( | ||||
Augie Fackler
|
r43347 | b'remote repository changed while pushing - please try again ' | ||
b'(bookmark "%s" is missing, expected %s)' | ||||
Augie Fackler
|
r43346 | ) | ||
msgexist = ( | ||||
Augie Fackler
|
r43347 | b'remote repository changed while pushing - please try again ' | ||
b'(bookmark "%s" set on %s, expected missing)' | ||||
Augie Fackler
|
r43346 | ) | ||
Boris Feld
|
r35259 | for book, node in bookdata: | ||
currentnode = op.repo._bookmarks.get(book) | ||||
if currentnode != node: | ||||
if node is None: | ||||
Joerg Sonnenberger
|
r46729 | finalmsg = msgexist % (book, short(currentnode)) | ||
Boris Feld
|
r35259 | elif currentnode is None: | ||
Joerg Sonnenberger
|
r46729 | finalmsg = msgmissing % (book, short(node)) | ||
Boris Feld
|
r35259 | else: | ||
Augie Fackler
|
r43346 | finalmsg = msgstandard % ( | ||
book, | ||||
Joerg Sonnenberger
|
r46729 | short(node), | ||
short(currentnode), | ||||
Augie Fackler
|
r43346 | ) | ||
Boris Feld
|
r35259 | raise error.PushRaced(finalmsg) | ||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'check:heads') | ||
Mike Hommey
|
r22548 | def handlecheckheads(op, inpart): | ||
Pierre-Yves David
|
r21060 | """check that head of the repo did not change | ||
This is used to detect a push race when using unbundle. | ||||
This replaces the "heads" argument of unbundle.""" | ||||
h = inpart.read(20) | ||||
heads = [] | ||||
while len(h) == 20: | ||||
heads.append(h) | ||||
h = inpart.read(20) | ||||
assert not h | ||||
Durham Goode
|
r26565 | # Trigger a transaction so that we are guaranteed to have the lock now. | ||
Augie Fackler
|
r43347 | if op.ui.configbool(b'experimental', b'bundle2lazylocking'): | ||
Durham Goode
|
r26565 | op.gettransaction() | ||
Mads Kiilerich
|
r29294 | if sorted(heads) != sorted(op.repo.heads()): | ||
Augie Fackler
|
r43346 | raise error.PushRaced( | ||
Martin von Zweigbergk
|
r43387 | b'remote repository changed while pushing - please try again' | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r21130 | |||
Augie Fackler
|
r43347 | @parthandler(b'check:updated-heads') | ||
r32709 | def handlecheckupdatedheads(op, inpart): | |||
"""check for race on the heads touched by a push | ||||
This is similar to 'check:heads' but focus on the heads actually updated | ||||
during the push. If other activities happen on unrelated heads, it is | ||||
ignored. | ||||
This allow server with high traffic to avoid push contention as long as | ||||
unrelated parts of the graph are involved.""" | ||||
h = inpart.read(20) | ||||
heads = [] | ||||
while len(h) == 20: | ||||
heads.append(h) | ||||
h = inpart.read(20) | ||||
assert not h | ||||
# trigger a transaction so that we are guaranteed to have the lock now. | ||||
Augie Fackler
|
r43347 | if op.ui.configbool(b'experimental', b'bundle2lazylocking'): | ||
r32709 | op.gettransaction() | |||
currentheads = set() | ||||
Pulkit Goyal
|
r42169 | for ls in op.repo.branchmap().iterheads(): | ||
r32709 | currentheads.update(ls) | |||
for h in heads: | ||||
if h not in currentheads: | ||||
Augie Fackler
|
r43346 | raise error.PushRaced( | ||
Augie Fackler
|
r43347 | b'remote repository changed while pushing - ' | ||
b'please try again' | ||||
Augie Fackler
|
r43346 | ) | ||
r32709 | ||||
Augie Fackler
|
r43347 | @parthandler(b'check:phases') | ||
Boris Feld
|
r34821 | def handlecheckphases(op, inpart): | ||
"""check that phase boundaries of the repository did not change | ||||
This is used to detect a push race. | ||||
""" | ||||
phasetonodes = phases.binarydecode(inpart) | ||||
unfi = op.repo.unfiltered() | ||||
cl = unfi.changelog | ||||
phasecache = unfi._phasecache | ||||
Augie Fackler
|
r43346 | msg = ( | ||
Augie Fackler
|
r43347 | b'remote repository changed while pushing - please try again ' | ||
b'(%s is %s expected %s)' | ||||
Augie Fackler
|
r43346 | ) | ||
Gregory Szorc
|
r49768 | for expectedphase, nodes in phasetonodes.items(): | ||
Boris Feld
|
r34821 | for n in nodes: | ||
actualphase = phasecache.phase(unfi, cl.rev(n)) | ||||
if actualphase != expectedphase: | ||||
Augie Fackler
|
r43346 | finalmsg = msg % ( | ||
Joerg Sonnenberger
|
r46729 | short(n), | ||
Augie Fackler
|
r43346 | phases.phasenames[actualphase], | ||
phases.phasenames[expectedphase], | ||||
) | ||||
Boris Feld
|
r34821 | raise error.PushRaced(finalmsg) | ||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'output') | ||
Pierre-Yves David
|
r21131 | def handleoutput(op, inpart): | ||
"""forward output captured on the server to the client""" | ||||
for line in inpart.read().splitlines(): | ||||
Augie Fackler
|
r43347 | op.ui.status(_(b'remote: %s\n') % line) | ||
@parthandler(b'replycaps') | ||||
Pierre-Yves David
|
r21130 | def handlereplycaps(op, inpart): | ||
"""Notify that a reply bundle should be created | ||||
Pierre-Yves David
|
r21138 | The payload contains the capabilities information for the reply""" | ||
caps = decodecaps(inpart.read()) | ||||
Pierre-Yves David
|
r21130 | if op.reply is None: | ||
Pierre-Yves David
|
r21135 | op.reply = bundle20(op.ui, caps) | ||
Pierre-Yves David
|
r21131 | |||
Augie Fackler
|
r43346 | |||
Gregory Szorc
|
r26829 | class AbortFromPart(error.Abort): | ||
"""Sub-class of Abort that denotes an error from a bundle2 part.""" | ||||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'error:abort', (b'message', b'hint')) | ||
Pierre-Yves David
|
r24741 | def handleerrorabort(op, inpart): | ||
Pierre-Yves David
|
r21177 | """Used to transmit abort error over the wire""" | ||
Augie Fackler
|
r43346 | raise AbortFromPart( | ||
Augie Fackler
|
r43347 | inpart.params[b'message'], hint=inpart.params.get(b'hint') | ||
Augie Fackler
|
r43346 | ) | ||
@parthandler( | ||||
Augie Fackler
|
r43347 | b'error:pushkey', | ||
(b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'), | ||||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r25493 | def handleerrorpushkey(op, inpart): | ||
"""Used to transmit failure of a mandatory pushkey over the wire""" | ||||
kwargs = {} | ||||
Augie Fackler
|
r43347 | for name in (b'namespace', b'key', b'new', b'old', b'ret'): | ||
Pierre-Yves David
|
r25493 | value = inpart.params.get(name) | ||
if value is not None: | ||||
kwargs[name] = value | ||||
Augie Fackler
|
r43346 | raise error.PushkeyFailed( | ||
Augie Fackler
|
r43347 | inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs) | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r25493 | |||
Augie Fackler
|
r43347 | @parthandler(b'error:unsupportedcontent', (b'parttype', b'params')) | ||
Pierre-Yves David
|
r24741 | def handleerrorunsupportedcontent(op, inpart): | ||
Pierre-Yves David
|
r21619 | """Used to transmit unknown content error over the wire""" | ||
Pierre-Yves David
|
r21622 | kwargs = {} | ||
Augie Fackler
|
r43347 | parttype = inpart.params.get(b'parttype') | ||
Pierre-Yves David
|
r21627 | if parttype is not None: | ||
Augie Fackler
|
r43347 | kwargs[b'parttype'] = parttype | ||
params = inpart.params.get(b'params') | ||||
Pierre-Yves David
|
r21622 | if params is not None: | ||
Augie Fackler
|
r43347 | kwargs[b'params'] = params.split(b'\0') | ||
Pierre-Yves David
|
r21622 | |||
Augie Fackler
|
r36445 | raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs)) | ||
Pierre-Yves David
|
r21186 | |||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'error:pushraced', (b'message',)) | ||
Pierre-Yves David
|
r24741 | def handleerrorpushraced(op, inpart): | ||
Pierre-Yves David
|
r21186 | """Used to transmit push race error over the wire""" | ||
Augie Fackler
|
r43347 | raise error.ResponseError(_(b'push failed:'), inpart.params[b'message']) | ||
@parthandler(b'listkeys', (b'namespace',)) | ||||
Pierre-Yves David
|
r21655 | def handlelistkeys(op, inpart): | ||
"""retrieve pushkey namespace content stored in a bundle2""" | ||||
Augie Fackler
|
r43347 | namespace = inpart.params[b'namespace'] | ||
Pierre-Yves David
|
r21655 | r = pushkey.decodekeys(inpart.read()) | ||
Augie Fackler
|
r43347 | op.records.add(b'listkeys', (namespace, r)) | ||
@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new')) | ||||
Pierre-Yves David
|
r21660 | def handlepushkey(op, inpart): | ||
"""process a pushkey request""" | ||||
dec = pushkey.decode | ||||
Augie Fackler
|
r43347 | namespace = dec(inpart.params[b'namespace']) | ||
key = dec(inpart.params[b'key']) | ||||
old = dec(inpart.params[b'old']) | ||||
new = dec(inpart.params[b'new']) | ||||
Durham Goode
|
r26565 | # Grab the transaction to ensure that we have the lock before performing the | ||
# pushkey. | ||||
Augie Fackler
|
r43347 | if op.ui.configbool(b'experimental', b'bundle2lazylocking'): | ||
Durham Goode
|
r26565 | op.gettransaction() | ||
Pierre-Yves David
|
r21660 | ret = op.repo.pushkey(namespace, key, old, new) | ||
Augie Fackler
|
r43347 | record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new} | ||
op.records.add(b'pushkey', record) | ||||
Pierre-Yves David
|
r21660 | if op.reply is not None: | ||
Augie Fackler
|
r43347 | rpart = op.reply.newpart(b'reply:pushkey') | ||
Augie Fackler
|
r33638 | rpart.addparam( | ||
Augie Fackler
|
r43347 | b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | rpart.addparam(b'return', b'%i' % ret, mandatory=False) | ||
Pierre-Yves David
|
r25481 | if inpart.mandatory and not ret: | ||
Pierre-Yves David
|
r25484 | kwargs = {} | ||
Augie Fackler
|
r43347 | for key in (b'namespace', b'key', b'new', b'old', b'ret'): | ||
Pierre-Yves David
|
r25484 | if key in inpart.params: | ||
kwargs[key] = inpart.params[key] | ||||
Augie Fackler
|
r43346 | raise error.PushkeyFailed( | ||
Augie Fackler
|
r43347 | partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs) | ||
Augie Fackler
|
r43346 | ) | ||
Pierre-Yves David
|
r21660 | |||
Augie Fackler
|
r43347 | @parthandler(b'bookmarks') | ||
Boris Feld
|
r35261 | def handlebookmark(op, inpart): | ||
"""transmit bookmark information | ||||
Boris Feld
|
r35267 | The part contains binary encoded bookmark information. | ||
The exact behavior of this part can be controlled by the 'bookmarks' mode | ||||
on the bundle operation. | ||||
Boris Feld
|
r35261 | |||
Boris Feld
|
r35267 | When mode is 'apply' (the default) the bookmark information is applied as | ||
is to the unbundling repository. Make sure a 'check:bookmarks' part is | ||||
issued earlier to check for push races in such update. This behavior is | ||||
suitable for pushing. | ||||
When mode is 'records', the information is recorded into the 'bookmarks' | ||||
records of the bundle operation. This behavior is suitable for pulling. | ||||
Boris Feld
|
r35261 | """ | ||
Joerg Sonnenberger
|
r47538 | changes = bookmarks.binarydecode(op.repo, inpart) | ||
Boris Feld
|
r35262 | |||
Augie Fackler
|
r43347 | pushkeycompat = op.repo.ui.configbool( | ||
b'server', b'bookmarks-pushkey-compat' | ||||
) | ||||
bookmarksmode = op.modes.get(b'bookmarks', b'apply') | ||||
if bookmarksmode == b'apply': | ||||
Boris Feld
|
r35267 | tr = op.gettransaction() | ||
bookstore = op.repo._bookmarks | ||||
if pushkeycompat: | ||||
allhooks = [] | ||||
for book, node in changes: | ||||
hookargs = tr.hookargs.copy() | ||||
Augie Fackler
|
r43347 | hookargs[b'pushkeycompat'] = b'1' | ||
hookargs[b'namespace'] = b'bookmarks' | ||||
hookargs[b'key'] = book | ||||
Joerg Sonnenberger
|
r46729 | hookargs[b'old'] = hex(bookstore.get(book, b'')) | ||
hookargs[b'new'] = hex(node if node is not None else b'') | ||||
Boris Feld
|
r35267 | allhooks.append(hookargs) | ||
for hookargs in allhooks: | ||||
Augie Fackler
|
r43346 | op.repo.hook( | ||
Augie Fackler
|
r43347 | b'prepushkey', throw=True, **pycompat.strkwargs(hookargs) | ||
Augie Fackler
|
r43346 | ) | ||
Boris Feld
|
r35267 | |||
Valentin Gatien-Baron
|
r44854 | for book, node in changes: | ||
if bookmarks.isdivergent(book): | ||||
msg = _(b'cannot accept divergent bookmark %s!') % book | ||||
raise error.Abort(msg) | ||||
Boris Feld
|
r35267 | bookstore.applychanges(op.repo, op.gettransaction(), changes) | ||
if pushkeycompat: | ||||
Augie Fackler
|
r43346 | |||
Kyle Lippincott
|
r44217 | def runhook(unused_success): | ||
Boris Feld
|
r35267 | for hookargs in allhooks: | ||
Augie Fackler
|
r43347 | op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs)) | ||
Augie Fackler
|
r43346 | |||
Boris Feld
|
r35267 | op.repo._afterlock(runhook) | ||
Augie Fackler
|
r43347 | elif bookmarksmode == b'records': | ||
Boris Feld
|
r35262 | for book, node in changes: | ||
Augie Fackler
|
r43347 | record = {b'bookmark': book, b'node': node} | ||
op.records.add(b'bookmarks', record) | ||||
Boris Feld
|
r35267 | else: | ||
Augie Fackler
|
r43347 | raise error.ProgrammingError( | ||
Raphaël Gomès
|
r49252 | b'unknown bookmark mode: %s' % bookmarksmode | ||
Augie Fackler
|
r43347 | ) | ||
@parthandler(b'phase-heads') | ||||
Martin von Zweigbergk
|
r33031 | def handlephases(op, inpart): | ||
"""apply phases from bundle part to repo""" | ||||
Boris Feld
|
r34321 | headsbyphase = phases.binarydecode(inpart) | ||
Boris Feld
|
r34322 | phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase) | ||
Martin von Zweigbergk
|
r33031 | |||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'reply:pushkey', (b'return', b'in-reply-to')) | ||
Pierre-Yves David
|
r21660 | def handlepushkeyreply(op, inpart): | ||
"""retrieve the result of a pushkey request""" | ||||
Augie Fackler
|
r43347 | ret = int(inpart.params[b'return']) | ||
partid = int(inpart.params[b'in-reply-to']) | ||||
op.records.add(b'pushkey', {b'return': ret}, partid) | ||||
@parthandler(b'obsmarkers') | ||||
Pierre-Yves David
|
r22336 | def handleobsmarker(op, inpart): | ||
"""add a stream of obsmarkers to the repo""" | ||||
tr = op.gettransaction() | ||||
Pierre-Yves David
|
r24733 | markerdata = inpart.read() | ||
Augie Fackler
|
r43347 | if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'): | ||
Augie Fackler
|
r43350 | op.ui.writenoi18n( | ||
Augie Fackler
|
r43347 | b'obsmarker-exchange: %i bytes received\n' % len(markerdata) | ||
) | ||||
Pierre-Yves David
|
r26685 | # The mergemarkers call will crash if marker creation is not enabled. | ||
# we want to avoid this if the part is advisory. | ||||
if not inpart.mandatory and op.repo.obsstore.readonly: | ||||
Augie Fackler
|
r43347 | op.repo.ui.debug( | ||
b'ignoring obsolescence markers, feature not enabled\n' | ||||
) | ||||
Pierre-Yves David
|
r26685 | return | ||
Pierre-Yves David
|
r24733 | new = op.repo.obsstore.mergemarkers(tr, markerdata) | ||
r32314 | op.repo.invalidatevolatilesets() | |||
Augie Fackler
|
r43347 | op.records.add(b'obsmarkers', {b'new': new}) | ||
Pierre-Yves David
|
r22340 | if op.reply is not None: | ||
Augie Fackler
|
r43347 | rpart = op.reply.newpart(b'reply:obsmarkers') | ||
Augie Fackler
|
r33638 | rpart.addparam( | ||
Augie Fackler
|
r43347 | b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False | ||
Augie Fackler
|
r43346 | ) | ||
Augie Fackler
|
r43347 | rpart.addparam(b'new', b'%i' % new, mandatory=False) | ||
@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to')) | ||||
Martin von Zweigbergk
|
r25506 | def handleobsmarkerreply(op, inpart): | ||
Pierre-Yves David
|
r22340 | """retrieve the result of a pushkey request""" | ||
Augie Fackler
|
r43347 | ret = int(inpart.params[b'new']) | ||
partid = int(inpart.params[b'in-reply-to']) | ||||
op.records.add(b'obsmarkers', {b'new': ret}, partid) | ||||
@parthandler(b'hgtagsfnodes') | ||||
Gregory Szorc
|
r25401 | def handlehgtagsfnodes(op, inpart): | ||
"""Applies .hgtags fnodes cache entries to the local repo. | ||||
Payload is pairs of 20 byte changeset nodes and filenodes. | ||||
""" | ||||
Durham Goode
|
r26565 | # Grab the transaction so we ensure that we have the lock at this point. | ||
Augie Fackler
|
r43347 | if op.ui.configbool(b'experimental', b'bundle2lazylocking'): | ||
Durham Goode
|
r26565 | op.gettransaction() | ||
Gregory Szorc
|
r25401 | cache = tags.hgtagsfnodescache(op.repo.unfiltered()) | ||
count = 0 | ||||
while True: | ||||
node = inpart.read(20) | ||||
fnode = inpart.read(20) | ||||
if len(node) < 20 or len(fnode) < 20: | ||||
Augie Fackler
|
r43347 | op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n') | ||
Gregory Szorc
|
r25401 | break | ||
cache.setfnode(node, fnode) | ||||
count += 1 | ||||
cache.write() | ||||
Augie Fackler
|
r43347 | op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count) | ||
rbcstruct = struct.Struct(b'>III') | ||||
@parthandler(b'cache:rev-branch-cache') | ||||
Boris Feld
|
r36981 | def handlerbc(op, inpart): | ||
Joerg Sonnenberger
|
r47084 | """Legacy part, ignored for compatibility with bundles from or | ||
for Mercurial before 5.7. Newer Mercurial computes the cache | ||||
efficiently enough during unbundling that the additional transfer | ||||
is unnecessary.""" | ||||
Boris Feld
|
r36981 | |||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'pushvars') | ||
Pulkit Goyal
|
r33656 | def bundle2getvars(op, part): | ||
'''unbundle a bundle2 containing shellvars on the server''' | ||||
# An option to disable unbundling on server-side for security reasons | ||||
Augie Fackler
|
r43347 | if op.ui.configbool(b'push', b'pushvars.server'): | ||
Pulkit Goyal
|
r33656 | hookargs = {} | ||
for key, value in part.advisoryparams: | ||||
key = key.upper() | ||||
# We want pushed variables to have USERVAR_ prepended so we know | ||||
# they came from the --pushvar flag. | ||||
Augie Fackler
|
r43347 | key = b"USERVAR_" + key | ||
Pulkit Goyal
|
r33656 | hookargs[key] = value | ||
op.addhookargs(hookargs) | ||||
Boris Feld
|
r35776 | |||
Augie Fackler
|
r43346 | |||
Augie Fackler
|
r43347 | @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount')) | ||
Gregory Szorc
|
r35806 | def handlestreamv2bundle(op, part): | ||
Valentin Gatien-Baron
|
r49847 | requirements = urlreq.unquote(part.params[b'requirements']) | ||
requirements = requirements.split(b',') if requirements else [] | ||||
Augie Fackler
|
r43347 | filecount = int(part.params[b'filecount']) | ||
bytecount = int(part.params[b'bytecount']) | ||||
Boris Feld
|
r35776 | |||
repo = op.repo | ||||
if len(repo): | ||||
Augie Fackler
|
r43347 | msg = _(b'cannot apply stream clone to non empty repository') | ||
Boris Feld
|
r35776 | raise error.Abort(msg) | ||
Augie Fackler
|
r43347 | repo.ui.debug(b'applying stream bundle\n') | ||
Augie Fackler
|
r43346 | streamclone.applybundlev2(repo, part, filecount, bytecount, requirements) | ||
Arseniy Alekseyev
|
r51599 | @parthandler(b'stream3-exp', (b'requirements',)) | ||
r51417 | def handlestreamv3bundle(op, part): | |||
Arseniy Alekseyev
|
r51599 | requirements = urlreq.unquote(part.params[b'requirements']) | ||
requirements = requirements.split(b',') if requirements else [] | ||||
repo = op.repo | ||||
if len(repo): | ||||
msg = _(b'cannot apply stream clone to non empty repository') | ||||
raise error.Abort(msg) | ||||
repo.ui.debug(b'applying stream bundle\n') | ||||
streamclone.applybundlev3(repo, part, requirements) | ||||
r51417 | ||||
Augie Fackler
|
r43346 | def widen_bundle( | ||
bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses | ||||
): | ||||
Pulkit Goyal
|
r40108 | """generates bundle2 for widening a narrow clone | ||
Pulkit Goyal
|
r42607 | bundler is the bundle to which data should be added | ||
Pulkit Goyal
|
r40108 | repo is the localrepository instance | ||
Martin von Zweigbergk
|
r40380 | oldmatcher matches what the client already has | ||
newmatcher matches what the client needs (including what it already has) | ||||
Pulkit Goyal
|
r40108 | common is set of common heads between server and client | ||
known is a set of revs known on the client side (used in ellipses) | ||||
cgversion is the changegroup version to send | ||||
ellipses is boolean value telling whether to send ellipses data or not | ||||
returns bundle2 of the data required for extending | ||||
""" | ||||
commonnodes = set() | ||||
cl = repo.changelog | ||||
Augie Fackler
|
r43347 | for r in repo.revs(b"::%ln", common): | ||
Pulkit Goyal
|
r40108 | commonnodes.add(cl.node(r)) | ||
if commonnodes: | ||||
Augie Fackler
|
r43346 | packer = changegroup.getbundler( | ||
cgversion, | ||||
repo, | ||||
oldmatcher=oldmatcher, | ||||
matcher=newmatcher, | ||||
fullnodes=commonnodes, | ||||
) | ||||
cgdata = packer.generate( | ||||
Joerg Sonnenberger
|
r47771 | {repo.nullid}, | ||
Augie Fackler
|
r43346 | list(commonnodes), | ||
False, | ||||
Augie Fackler
|
r43347 | b'narrow_widen', | ||
Augie Fackler
|
r43346 | changelog=False, | ||
) | ||||
Pulkit Goyal
|
r40108 | |||
Augie Fackler
|
r43347 | part = bundler.newpart(b'changegroup', data=cgdata) | ||
part.addparam(b'version', cgversion) | ||||
Pulkit Goyal
|
r46129 | if scmutil.istreemanifest(repo): | ||
Augie Fackler
|
r43347 | part.addparam(b'treemanifest', b'1') | ||
r48000 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: | |||
part.addparam(b'exp-sidedata', b'1') | ||||
wanted = format_remote_wanted_sidedata(repo) | ||||
part.addparam(b'exp-wanted-sidedata', wanted) | ||||
Pulkit Goyal
|
r40108 | |||
return bundler | ||||