stream-clone: add the `-exp` prefix to the bundle part...
marmoute
r51425:116da6bb default
@@ -1,2664 +1,2664 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If the first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is upper case, the parameter is mandatory and the bundling process
  MUST stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        A part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase character it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""


import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
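

# Illustrative sketch, not part of the original module: with two parameters
# _makefpartparamsizes produces b'>BBBB', one (key size, value size) couple
# per parameter. The sample bytes below are hypothetical.
def _demo_part_param_sizes():
    fmt = _makefpartparamsizes(2)  # -> b'>BBBB'
    # e.g. keys of 3 and 4 bytes, values of 5 and 0 bytes
    return _unpack(fmt, b'\x03\x05\x04\x00')  # -> (3, 5, 4, 0)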


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler(b'myparttype', (b'mandatory', b'param', b'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


class unbundlerecords:
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
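

# Illustrative sketch, not part of the original module: typical use of the
# record keeper above; the categories and payloads are hypothetical.
def _demo_records():
    records = unbundlerecords()
    records.add(b'changegroup', {b'return': 1})
    records.add(b'output', b'server said hi')
    assert records[b'changegroup'] == ({b'return': 1},)
    # iterating yields (category, entry) pairs in chronological order
    return list(records)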


class bundleoperation:
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(
        self,
        repo,
        transactiongetter,
        captureoutput=True,
        source=b'',
        remote=None,
    ):
        self.repo = repo
        # the peer object who produced this bundle if available
        self.remote = remote
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries value that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(
            repo, unbundler, lambda: tr, source=source, remote=remote
        )
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator:
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
        # Exception, and should not trigger a graceful cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions raised by
            # bundle2 processing from those raised when processing the old
            # format. This is mostly needed to handle different return codes
            # to unbundle according to the type of bundle. We should probably
            # clean up or drop this return code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(
    repo,
    unbundler,
    transactiongetter=None,
    op=None,
    source=b'',
    remote=None,
):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(
            repo,
            transactiongetter,
            source=source,
            remote=remote,
        )
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    if op.remote is not None and op.remote.path is not None:
        remote_path = op.remote.path
        kwargs = kwargs.copy()
        kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)
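

# Illustrative sketch, not part of the original module: round-tripping a caps
# dictionary through encodecaps/decodecaps; the capability names are
# hypothetical.
def _demo_caps_roundtrip():
    caps = {b'HG20': [], b'checkheads': [b'related']}
    blob = encodecaps(caps)  # -> b'HG20\ncheckheads=related'
    assert decodecaps(blob) == caps
    return blob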


bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20:
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        The part is directly added to the containers. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged
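

# Illustrative sketch, not part of the original module: assembling a minimal
# bundle2 container. `ui` is assumed to be a Mercurial ui object; the part
# type and payload are only examples.
def _demo_build_bundle(ui):
    bundler = bundle20(ui)
    bundler.newpart(b'output', data=b'hello', mandatory=False)
    # joining the generator's chunks materializes the raw container bytes
    return b''.join(bundler.getchunks())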


class unpackermixin:
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = b'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, b'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process a space separated blob of stream level parameters"""
        params = util.sortdict()
        for p in paramsblock.split(b' '):
            p = p.split(b'=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params
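
    # For illustration (hypothetical input, not part of the original module):
    # a params block such as b'Compression=BZ simple' decodes to an ordered
    # dict {b'Compression': b'BZ', b'simple': None}; the b'Compression' entry
    # also triggers the matching handler from b2streamparamsmap.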

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory; this function will raise an error when they are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise ValueError('non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, b"ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends, relying on the bundle
        being interpreted to find its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            # The payload itself is decompressed below, so drop
            # the compression parameter passed down to compensate.
            outparams = []
            for p in params.split(b' '):
                k, v = p.split(b'=', 1)
                if k.lower() != b'compression':
                    outparams.append(p)
            outparams = b' '.join(outparams)
            yield _pack(_fstreamparamsize, len(outparams))
            yield outparams
        else:
            yield _pack(_fstreamparamsize, paramssize)
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
928 while emptycount < 2:
928 while emptycount < 2:
929 # so we can brainlessly loop
929 # so we can brainlessly loop
930 assert _fpartheadersize == _fpayloadsize
930 assert _fpartheadersize == _fpayloadsize
931 size = self._unpack(_fpartheadersize)[0]
931 size = self._unpack(_fpartheadersize)[0]
932 yield _pack(_fpartheadersize, size)
932 yield _pack(_fpartheadersize, size)
933 if size:
933 if size:
934 emptycount = 0
934 emptycount = 0
935 else:
935 else:
936 emptycount += 1
936 emptycount += 1
937 continue
937 continue
938 if size == flaginterrupt:
938 if size == flaginterrupt:
939 continue
939 continue
940 elif size < 0:
940 elif size < 0:
941 raise error.BundleValueError(b'negative chunk size: %i')
941 raise error.BundleValueError(b'negative chunk size: %i')
942 yield self._readexact(size)
942 yield self._readexact(size)
943
943
    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, b'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure the part is fully consumed so we can start reading the
            # next part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, b'end of bundle2 stream')

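    # A minimal consumption sketch (illustrative, not code from this module;
    # it assumes 'fp' is positioned just past the b'HG20' magic string, as
    # getunbundler() arranges):
    #
    #   unbundler = unbundle20(ui, fp)
    #   for part in unbundler.iterparts():
    #       data = part.read()
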
    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params  # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()


formatmap = {b'20': unbundle20}

b2streamparamsmap = {}


def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""

    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func

    return decorator


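# A minimal registration sketch (commented out since registering would
# mutate b2streamparamsmap for real; the b'myfeature' parameter below is
# hypothetical):
#
#   @b2streamparamhandler(b'myfeature')
#   def processmyfeature(unbundler, param, value):
#       indebug(unbundler.ui, b'myfeature value: %s' % value)
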
@b2streamparamhandler(b'compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True


class bundlepart:
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither the data nor the parameters can be modified after generation has
    begun.
    """

    def __init__(
        self,
        parttype,
        mandatoryparams=(),
        advisoryparams=(),
        data=b'',
        mandatory=True,
    ):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError(b'duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
            cls,
            id(self),
            self.id,
            self.type,
            self.mandatory,
        )

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(
            self.type,
            self._mandatoryparams,
            self._advisoryparams,
            self._data,
            self.mandatory,
        )

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value=b'', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        if name in self._seenparams:
            raise ValueError(b'duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

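    # A minimal usage sketch (an illustrative assumption, not code from this
    # module; b'output' is an existing part type):
    #
    #   part = bundlepart(b'output', data=b'hello')
    #   part.addparam(b'lang', b'en', mandatory=False)
    #   part.id = 0  # normally assigned when added to a bundle20
    #   for chunk in part.getchunks(ui):
    #       fp.write(chunk)
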
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError(b'part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = [b'bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            if not self.data:
                msg.append(b' empty payload')
            elif util.safehasattr(self.data, 'next') or util.safehasattr(
                self.data, b'__next__'
            ):
                msg.append(b' streamed payload')
            else:
                msg.append(b' %i bytes payload' % len(self.data))
            msg.append(b'\n')
            ui.debug(b''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [
            _pack(_fparttypesize, len(parttype)),
            parttype,
            _pack(_fpartid, self.id),
        ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = b''.join(header)
        except TypeError:
            raise TypeError(
                'Found a non-bytes trying to '
                'build bundle part header: %r' % header
            )
        outdebug(ui, b'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, b'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug(b'bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug(
                b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
            )
            tb = sys.exc_info()[2]
            msg = b'unexpected error: %s' % bexc
            interpart = bundlepart(
                b'error:abort', [(b'message', msg)], mandatory=False
            )
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, b'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, b'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True
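
    # Wire-format summary of getchunks() (descriptive, derived from the code
    # above): each part is emitted as
    #   >i header_size | header | (>i chunk_size | chunk)* | >i 0
    # where a chunk size of -1 introduces an out-of-band b'error:abort' part.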

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next') or util.safehasattr(
            self.data, b'__next__'
        ):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1


class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug(
            b'bundle2-input-stream-interrupt: opening out of band context\n'
        )
        indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, b'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug(
            b'bundle2-input-stream-interrupt: closing out of band context\n'
        )


class interruptoperation:
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError(b'no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable(b'no repo access from stream interruption')
1321
1321
1322 def decodepayloadchunks(ui, fh):
1322 def decodepayloadchunks(ui, fh):
1323 """Reads bundle2 part payload data into chunks.
1323 """Reads bundle2 part payload data into chunks.
1324
1324
1325 Part payload data consists of framed chunks. This function takes
1325 Part payload data consists of framed chunks. This function takes
1326 a file handle and emits those chunks.
1326 a file handle and emits those chunks.
1327 """
1327 """
1328 dolog = ui.configbool(b'devel', b'bundle2.debug')
1328 dolog = ui.configbool(b'devel', b'bundle2.debug')
1329 debug = ui.debug
1329 debug = ui.debug
1330
1330
1331 headerstruct = struct.Struct(_fpayloadsize)
1331 headerstruct = struct.Struct(_fpayloadsize)
1332 headersize = headerstruct.size
1332 headersize = headerstruct.size
1333 unpack = headerstruct.unpack
1333 unpack = headerstruct.unpack
1334
1334
1335 readexactly = changegroup.readexactly
1335 readexactly = changegroup.readexactly
1336 read = fh.read
1336 read = fh.read
1337
1337
1338 chunksize = unpack(readexactly(fh, headersize))[0]
1338 chunksize = unpack(readexactly(fh, headersize))[0]
1339 indebug(ui, b'payload chunk size: %i' % chunksize)
1339 indebug(ui, b'payload chunk size: %i' % chunksize)
1340
1340
1341 # changegroup.readexactly() is inlined below for performance.
1341 # changegroup.readexactly() is inlined below for performance.
1342 while chunksize:
1342 while chunksize:
1343 if chunksize >= 0:
1343 if chunksize >= 0:
1344 s = read(chunksize)
1344 s = read(chunksize)
1345 if len(s) < chunksize:
1345 if len(s) < chunksize:
1346 raise error.Abort(
1346 raise error.Abort(
1347 _(
1347 _(
1348 b'stream ended unexpectedly '
1348 b'stream ended unexpectedly '
1349 b' (got %d bytes, expected %d)'
1349 b' (got %d bytes, expected %d)'
1350 )
1350 )
1351 % (len(s), chunksize)
1351 % (len(s), chunksize)
1352 )
1352 )
1353
1353
1354 yield s
1354 yield s
1355 elif chunksize == flaginterrupt:
1355 elif chunksize == flaginterrupt:
1356 # Interrupt "signal" detected. The regular stream is interrupted
1356 # Interrupt "signal" detected. The regular stream is interrupted
1357 # and a bundle2 part follows. Consume it.
1357 # and a bundle2 part follows. Consume it.
1358 interrupthandler(ui, fh)()
1358 interrupthandler(ui, fh)()
1359 else:
1359 else:
1360 raise error.BundleValueError(
1360 raise error.BundleValueError(
1361 b'negative payload chunk size: %s' % chunksize
1361 b'negative payload chunk size: %s' % chunksize
1362 )
1362 )
1363
1363
1364 s = read(headersize)
1364 s = read(headersize)
1365 if len(s) < headersize:
1365 if len(s) < headersize:
1366 raise error.Abort(
1366 raise error.Abort(
1367 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1367 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1368 % (len(s), chunksize)
1368 % (len(s), chunksize)
1369 )
1369 )
1370
1370
1371 chunksize = unpack(s)[0]
1371 chunksize = unpack(s)[0]
1372
1372
1373 # indebug() inlined for performance.
1373 # indebug() inlined for performance.
1374 if dolog:
1374 if dolog:
1375 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1375 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1376
1376
1377
1377
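# Illustrative framing example for the reader above (an assumption, not a
# recorded test vector): a payload consisting of b'hello' followed by
# end-of-payload arrives as
#
#   struct.pack('>i', 5) + b'hello' + struct.pack('>i', 0)
#
# while struct.pack('>i', -1) (flaginterrupt) announces an out-of-band part
# before the regular payload resumes.
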
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
            fp, b'tell'
        )
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset : (offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
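
    # e.g. (illustrative values) _initparams([(b'version', b'02')],
    # [(b'nbchanges', b'3')]) leaves self.params holding both keys while
    # self.mandatorykeys == frozenset((b'version',)).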

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, b'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = self.type != self.type.lower()
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # the header is fully read, mark the part as initialized
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug(
                    b'bundle2-input-part: total payload size %i\n' % self._pos
                )
            self.consumed = True
        return data


class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """

    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, b'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), (
                b'Unknown chunk %d' % chunknum
            )
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError(b'Unknown chunk')

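    # Worked example for _findchunk (derived from the code above): with
    # _chunkindex == [(0, 100), (5, 114)], _findchunk(3) == (0, 3) and
    # _findchunk(5) == (1, 0).
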
    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError(b'Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError(b'Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_(b'Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_(b'File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
            return None


# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {
    b'HG20': (),
    b'bookmarks': (),
    b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
    b'listkeys': (),
    b'pushkey': (),
    b'digests': tuple(sorted(util.DIGESTS.keys())),
    b'remote-changegroup': (b'http', b'https'),
    b'hgtagsfnodes': (),
    b'phases': (b'heads',),
    b'stream': (b'v2',),
}


def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in (b'client', b'server'):
        raise error.ProgrammingError(b'role argument must be client or server')

    caps = capabilities.copy()
    caps[b'changegroup'] = tuple(
        sorted(changegroup.supportedincomingversions(repo))
    )
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
        caps[b'obsmarkers'] = supportedformat
    if allowpushback:
        caps[b'pushback'] = ()
    cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
    if cpmode == b'check-related':
        caps[b'checkheads'] = (b'related',)
    if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
        caps.pop(b'phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == b'server':
        streamsupported = repo.ui.configbool(
            b'server', b'uncompressed', untrusted=True
        )
        featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop(b'stream')
    # Else always advertise support on client, because payload support
    # should always be advertised.

    if repo.ui.configbool(b'experimental', b'stream-v3'):
        if b'stream' in caps:
            caps[b'stream'] += (b'v3-exp',)

    # b'rev-branch-cache' is no longer advertised, but still supported
    # for legacy clients.

    return caps


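# Illustrative result (configuration dependent; an assumption, not a recorded
# fixture): on a server with stream clones enabled, getrepocaps(repo,
# role=b'server') includes entries such as b'HG20': (), b'changegroup':
# (b'01', b'02', b'03'), b'phases': (b'heads',) and b'stream': (b'v2',).
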
def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable(b'bundle2')
    if not raw and raw != b'':
        return {}
    capsblob = urlreq.unquote(remote.capable(b'bundle2'))
    return decodecaps(capsblob)


def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict"""
    obscaps = caps.get(b'obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]


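# Worked example (follows directly from the comprehension above):
#   obsmarkersversion({b'obsmarkers': (b'V0', b'V1')}) == [0, 1]
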
def writenewbundle(
    ui,
    repo,
    source,
    filename,
    bundletype,
    outgoing,
    opts,
    vfs=None,
    compression=None,
    compopts=None,
    allow_internal=False,
):
    if bundletype.startswith(b'HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
        return writebundle(
            ui,
            cg,
            filename,
            bundletype,
            vfs=vfs,
            compression=compression,
            compopts=compopts,
        )
    elif not bundletype.startswith(b'HG20'):
        raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)

    # enforce that no internal phases are to be bundled
    bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
    if bundled_internal and not allow_internal:
        count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
        msg = "backup bundle would contain %d internal changesets"
        msg %= count
        raise error.ProgrammingError(msg)

    caps = {}
    if opts.get(b'obsolescence', False):
        caps[b'obsmarkers'] = (b'V1',)
    if opts.get(b'streamv2'):
        caps[b'stream'] = [b'v2']
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1745
1745
1746
1746
1747 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1747 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1748 # We should eventually reconcile this logic with the one behind
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now, so we keep them separated for now for the sake of
    # simplicity.

    # we might not always want a changegroup in such a bundle, for example in
    # stream bundles
    if opts.get(b'changegroup', True):
        cgversion = opts.get(b'cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        if opts.get(b'phases'):
            target_phase = phases.draft
            for head in outgoing.ancestorsof:
                target_phase = max(target_phase, repo[head].phase())
            if target_phase > phases.draft:
                part.addparam(
                    b'targetphase',
                    b'%d' % target_phase,
                    mandatory=False,
                )
        if repository.REPO_FEATURE_SIDE_DATA in repo.features:
            part.addparam(b'exp-sidedata', b'1')

    if opts.get(b'streamv2', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get(b'revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get(b'obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(
            bundler,
            obsmarkers,
            mandatory=opts.get(b'obsolescence-mandatory', True),
        )

    if opts.get(b'phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart(b'phase-heads', data=phasedata)
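
# Illustrative sketch, not part of the original module: the opts consumed
# above map one-to-one onto optional bundle2 parts. A stream bundle, for
# instance, skips the changegroup and requests the stream part instead.
# The flag names are the ones tested above; the particular values here
# are invented for illustration.
_example_stream_bundle_opts = {
    b'changegroup': False,  # no 'changegroup' part
    b'streamv2': True,  # emit a stream part via addpartbundlestream2()
    b'tagsfnodescache': False,
    b'revbranchcache': False,
    b'obsolescence': False,
    b'phases': False,
}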


def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changeset
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))


def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changeset
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
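
# Hedged decoder sketch, not part of the original module: it documents the
# wire layout produced by generate() above. Per branch: a big-endian >III
# header (branch-name length, open-head count, closed-head count), the
# UTF-8 branch name, then the raw nodes. The 20-byte node length is an
# assumption (sha1-sized hashes).
import struct as _struct_example


def _decode_rev_branch_cache_example(data, nodelen=20):
    header = _struct_example.Struct('>III')
    offset = 0
    while offset < len(data):
        namelen, nopen, nclosed = header.unpack_from(data, offset)
        offset += header.size
        branch = data[offset : offset + namelen]
        offset += namelen
        nodes = []
        for _i in range(nopen + nclosed):
            nodes.append(data[offset : offset + nodelen])
            offset += nodelen
        yield branch, nodes[:nopen], nodes[nopen:]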


def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params
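
# Hedged worked example (inputs invented, Python 3 assumed): the helpers
# above drop the b"shared" requirement, sort, comma-join, and urlquote.
# A pure-Python equivalent of the quoting step:
from urllib.parse import quote as _quote_example

_reqs_example = sorted(
    req for req in ['revlogv1', 'store', 'shared'] if req != 'shared'
)
assert _quote_example(','.join(_reqs_example)) == 'revlogv1%2Cstore'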


def format_remote_wanted_sidedata(repo):
    """Formats a repo's wanted sidedata categories into a bytestring for
    capabilities exchange."""
    wanted = b""
    if repo._wanted_sidedata:
        wanted = b','.join(
            pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
        )
    return wanted


def read_remote_wanted_sidedata(remote):
    sidedata_categories = remote.capable(b'exp-wanted-sidedata')
    return read_wanted_sidedata(sidedata_categories)


def read_wanted_sidedata(formatted):
    if formatted:
        return set(formatted.split(b','))
    return set()
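
# Hedged round-trip example for the helpers above (the category names are
# invented; real ones come from the sidedata machinery):
assert read_wanted_sidedata(b'copies,phase') == {b'copies', b'phase'}
assert read_wanted_sidedata(b'') == set()
assert read_wanted_sidedata(None) == set()  # remote.capable() may return False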


def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        msg = _(b'stream data requested but server does not allow this feature')
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    if not (b'stream' in bundler.capabilities):
        msg = _(
            b'stream data requested but supported streaming clone versions were not specified'
        )
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    client_supported = set(bundler.capabilities[b'stream'])
    server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
    common_supported = client_supported & server_supported
    if not common_supported:
        msg = _(b'no common supported version with the client: %s; %s')
        str_server = b','.join(sorted(server_supported))
        str_client = b','.join(sorted(client_supported))
        msg %= (str_server, str_client)
        raise error.Abort(msg)
    version = max(common_supported)

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get('includepats')
    excludepats = kwargs.get('excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    if version == b"v2":
        filecount, bytecount, it = streamclone.generatev2(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream2', data=it)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'requirements', requirements, mandatory=True)
    elif version == b"v3-exp":
        filecount, bytecount, it = streamclone.generatev2(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream3-exp', data=it)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'requirements', requirements, mandatory=True)
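
# Hedged note on the negotiation above: max() of the common version set
# relies on bytewise ordering of the version tokens, e.g.:
assert max({b'v2'} & {b'v2', b'v3-exp'}) == b'v2'
assert max({b'v2', b'v3-exp'}) == b'v3-exp'  # b'v3-exp' sorts after b'v2'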


def buildobsmarkerspart(bundler, markers, mandatory=True):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)


def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only supports v1 changegroups')
            )

        # HG20 is the case without 2 values to unpack, but is handled above.
        # pytype: disable=bad-unpacking
        header, comp = bundletypes[bundletype]
        # pytype: enable=bad-unpacking

        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result
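
# Hedged restatement of the combination rule above, for illustration only
# (the record values are invented): each result encodes a head delta as
# 1 + added or -1 - removed, and any zero means failure.
def _combine_example(results):
    changedheads = 0
    for ret in results:
        if ret == 0:
            return 0
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        return 1 + changedheads
    if changedheads < 0:
        return -1 + changedheads
    return 1


assert _combine_example([3, -2]) == 2  # +2 heads and -1 head -> net +1
assert _combine_example([1, 0, 5]) == 0  # any zero short-circuits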
2046 return result
2047
2047
2048
2048
2049 @parthandler(
2049 @parthandler(
2050 b'changegroup',
2050 b'changegroup',
2051 (
2051 (
2052 b'version',
2052 b'version',
2053 b'nbchanges',
2053 b'nbchanges',
2054 b'exp-sidedata',
2054 b'exp-sidedata',
2055 b'exp-wanted-sidedata',
2055 b'exp-wanted-sidedata',
2056 b'treemanifest',
2056 b'treemanifest',
2057 b'targetphase',
2057 b'targetphase',
2058 ),
2058 ),
2059 )
2059 )
2060 def handlechangegroup(op, inpart):
2060 def handlechangegroup(op, inpart):
2061 """apply a changegroup part on the repo"""
2061 """apply a changegroup part on the repo"""
2062 from . import localrepo
2062 from . import localrepo
2063
2063
2064 tr = op.gettransaction()
2064 tr = op.gettransaction()
2065 unpackerversion = inpart.params.get(b'version', b'01')
2065 unpackerversion = inpart.params.get(b'version', b'01')
2066 # We should raise an appropriate exception here
2066 # We should raise an appropriate exception here
2067 cg = changegroup.getunbundler(unpackerversion, inpart, None)
2067 cg = changegroup.getunbundler(unpackerversion, inpart, None)
2068 # the source and url passed here are overwritten by the one contained in
2068 # the source and url passed here are overwritten by the one contained in
2069 # the transaction.hookargs argument. So 'bundle2' is a placeholder
2069 # the transaction.hookargs argument. So 'bundle2' is a placeholder
2070 nbchangesets = None
2070 nbchangesets = None
2071 if b'nbchanges' in inpart.params:
2071 if b'nbchanges' in inpart.params:
2072 nbchangesets = int(inpart.params.get(b'nbchanges'))
2072 nbchangesets = int(inpart.params.get(b'nbchanges'))
2073 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2073 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2074 if len(op.repo.changelog) != 0:
2074 if len(op.repo.changelog) != 0:
2075 raise error.Abort(
2075 raise error.Abort(
2076 _(
2076 _(
2077 b"bundle contains tree manifests, but local repo is "
2077 b"bundle contains tree manifests, but local repo is "
2078 b"non-empty and does not use tree manifests"
2078 b"non-empty and does not use tree manifests"
2079 )
2079 )
2080 )
2080 )
2081 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2081 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2082 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2082 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2083 op.repo.ui, op.repo.requirements, op.repo.features
2083 op.repo.ui, op.repo.requirements, op.repo.features
2084 )
2084 )
2085 scmutil.writereporequirements(op.repo)
2085 scmutil.writereporequirements(op.repo)
2086
2086
2087 extrakwargs = {}
2087 extrakwargs = {}
2088 targetphase = inpart.params.get(b'targetphase')
2088 targetphase = inpart.params.get(b'targetphase')
2089 if targetphase is not None:
2089 if targetphase is not None:
2090 extrakwargs['targetphase'] = int(targetphase)
2090 extrakwargs['targetphase'] = int(targetphase)
2091
2091
2092 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2092 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2093 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2093 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2094
2094
2095 ret = _processchangegroup(
2095 ret = _processchangegroup(
2096 op,
2096 op,
2097 cg,
2097 cg,
2098 tr,
2098 tr,
2099 op.source,
2099 op.source,
2100 b'bundle2',
2100 b'bundle2',
2101 expectedtotal=nbchangesets,
2101 expectedtotal=nbchangesets,
2102 **extrakwargs
2102 **extrakwargs
2103 )
2103 )
2104 if op.reply is not None:
2104 if op.reply is not None:
2105 # This is definitely not the final form of this
2105 # This is definitely not the final form of this
2106 # return. But one need to start somewhere.
2106 # return. But one need to start somewhere.
2107 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2107 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2108 part.addparam(
2108 part.addparam(
2109 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2109 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2110 )
2110 )
2111 part.addparam(b'return', b'%i' % ret, mandatory=False)
2111 part.addparam(b'return', b'%i' % ret, mandatory=False)
2112 assert not inpart.read()
2112 assert not inpart.read()


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate that what was
      retrieved by the client matches the server's knowledge of the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate that what was retrieved
      by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = urlutil.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (urlutil.hidepassword(raw_url), e.message)
        )
    assert not inpart.read()


@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races regarding bookmarks. It contains
    binary encoded (bookmark, node) tuples. If the local state does not
    match the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(op.repo, inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    short(node),
                    short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )
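
# Hedged client-side sketch of the payload parsed above (not the module's
# real producer): the part body is just the repository's 20-byte head
# nodes, concatenated back to back with no framing.
def _encode_check_heads_example(heads):
    assert all(len(h) == 20 for h in heads)  # sha1-sized nodes assumed
    return b''.join(heads)


assert _encode_check_heads_example([b'\x00' * 20]) == b'\x00' * 20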


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for races on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. If other activity happens on unrelated heads,
    it is ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in phasetonodes.items():
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )


@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior is
    suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(op.repo, inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = hex(bookstore.get(book, b''))
                hookargs[b'new'] = hex(node if node is not None else b'')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        for book, node in changes:
            if bookmarks.isdivergent(book):
                msg = _(b'cannot accept divergent bookmark %s!') % book
                raise error.Abort(msg)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook(unused_success):
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )


@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


@parthandler(b'hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
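
# Hedged producer-side counterpart to the reader above (name invented):
# the 'hgtagsfnodes' payload is a flat sequence of 20-byte
# (changeset node, .hgtags filenode) pairs with no framing.
def _encode_hgtagsfnodes_example(entries):
    return b''.join(node + fnode for node, fnode in entries)


assert (
    _encode_hgtagsfnodes_example([(b'\x01' * 20, b'\x02' * 20)])
    == b'\x01' * 20 + b'\x02' * 20
)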


rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """Legacy part, ignored for compatibility with bundles from or
    for Mercurial before 5.7. Newer Mercurial computes the cache
    efficiently enough during unbundling that the additional transfer
    is unnecessary."""


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)
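
# Hedged example of the key transformation above (values invented): a
# client-side '--pushvar debug=1' arrives as the advisory param
# ('debug', '1') and becomes the hook argument USERVAR_DEBUG=1.
assert b"USERVAR_" + b"debug".upper() == b"USERVAR_DEBUG"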
2595
2595
2596
2596
2597 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2597 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2598 def handlestreamv2bundle(op, part):
2598 def handlestreamv2bundle(op, part):
2599
2599
2600 requirements = urlreq.unquote(part.params[b'requirements'])
2600 requirements = urlreq.unquote(part.params[b'requirements'])
2601 requirements = requirements.split(b',') if requirements else []
2601 requirements = requirements.split(b',') if requirements else []
2602 filecount = int(part.params[b'filecount'])
2602 filecount = int(part.params[b'filecount'])
2603 bytecount = int(part.params[b'bytecount'])
2603 bytecount = int(part.params[b'bytecount'])
2604
2604
2605 repo = op.repo
2605 repo = op.repo
2606 if len(repo):
2606 if len(repo):
2607 msg = _(b'cannot apply stream clone to non empty repository')
2607 msg = _(b'cannot apply stream clone to non empty repository')
2608 raise error.Abort(msg)
2608 raise error.Abort(msg)
2609
2609
2610 repo.ui.debug(b'applying stream bundle\n')
2610 repo.ui.debug(b'applying stream bundle\n')
2611 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2611 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2612
2612
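Mirroring the parsing above, an emitter has to urlquote the comma-joined requirements and attach all three parameters as mandatory. A minimal, hypothetical sketch under those assumptions (`bundler` and the payload iterator `it` come from elsewhere):

    def add_stream2_part(bundler, reqs, filecount, bytecount, it):
        # hypothetical helper: parameter names and quoting must match what
        # handlestreamv2bundle() unquotes and splits back apart
        part = bundler.newpart(b'stream2', data=it)
        part.addparam(
            b'requirements', urlreq.quote(b','.join(sorted(reqs))), mandatory=True
        )
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)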
2613
2613
2614 @parthandler(b'stream3', (b'requirements', b'filecount', b'bytecount'))
2614 @parthandler(b'stream3-exp', (b'requirements', b'filecount', b'bytecount'))
2615 def handlestreamv3bundle(op, part):
2615 def handlestreamv3bundle(op, part):
2616 return handlestreamv2bundle(op, part)
2616 return handlestreamv2bundle(op, part)
2617
2617
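The test file below turns this variant on through the experimental.stream-v3 knob; an emitter could gate the part name on the same setting. A hypothetical sketch, keeping the -exp suffix until the format is frozen:

    def stream_part_name(ui):
        # hypothetical helper: pick the experimental part name only when
        # the v3 stream format is explicitly enabled
        if ui.configbool(b'experimental', b'stream-v3'):
            return b'stream3-exp'
        return b'stream2'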
2618
2618
2619 def widen_bundle(
2619 def widen_bundle(
2620 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2620 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2621 ):
2621 ):
2622 """generates bundle2 for widening a narrow clone
2622 """generates bundle2 for widening a narrow clone
2623
2623
2624 bundler is the bundle to which data should be added
2624 bundler is the bundle to which data should be added
2625 repo is the localrepository instance
2625 repo is the localrepository instance
2626 oldmatcher matches what the client already has
2626 oldmatcher matches what the client already has
2627 newmatcher matches what the client needs (including what it already has)
2627 newmatcher matches what the client needs (including what it already has)
2628 common is set of common heads between server and client
2628 common is set of common heads between server and client
2629 known is a set of revs known on the client side (used in ellipses)
2629 known is a set of revs known on the client side (used in ellipses)
2630 cgversion is the changegroup version to send
2630 cgversion is the changegroup version to send
2631 ellipses is a boolean telling whether to send ellipses data or not
2631 ellipses is a boolean telling whether to send ellipses data or not
2632
2632
2633 returns the bundle2 containing the data required for widening
2633 returns the bundle2 containing the data required for widening
2634 """
2634 """
2635 commonnodes = set()
2635 commonnodes = set()
2636 cl = repo.changelog
2636 cl = repo.changelog
2637 for r in repo.revs(b"::%ln", common):
2637 for r in repo.revs(b"::%ln", common):
2638 commonnodes.add(cl.node(r))
2638 commonnodes.add(cl.node(r))
2639 if commonnodes:
2639 if commonnodes:
2640 packer = changegroup.getbundler(
2640 packer = changegroup.getbundler(
2641 cgversion,
2641 cgversion,
2642 repo,
2642 repo,
2643 oldmatcher=oldmatcher,
2643 oldmatcher=oldmatcher,
2644 matcher=newmatcher,
2644 matcher=newmatcher,
2645 fullnodes=commonnodes,
2645 fullnodes=commonnodes,
2646 )
2646 )
2647 cgdata = packer.generate(
2647 cgdata = packer.generate(
2648 {repo.nullid},
2648 {repo.nullid},
2649 list(commonnodes),
2649 list(commonnodes),
2650 False,
2650 False,
2651 b'narrow_widen',
2651 b'narrow_widen',
2652 changelog=False,
2652 changelog=False,
2653 )
2653 )
2654
2654
2655 part = bundler.newpart(b'changegroup', data=cgdata)
2655 part = bundler.newpart(b'changegroup', data=cgdata)
2656 part.addparam(b'version', cgversion)
2656 part.addparam(b'version', cgversion)
2657 if scmutil.istreemanifest(repo):
2657 if scmutil.istreemanifest(repo):
2658 part.addparam(b'treemanifest', b'1')
2658 part.addparam(b'treemanifest', b'1')
2659 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2659 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2660 part.addparam(b'exp-sidedata', b'1')
2660 part.addparam(b'exp-sidedata', b'1')
2661 wanted = format_remote_wanted_sidedata(repo)
2661 wanted = format_remote_wanted_sidedata(repo)
2662 part.addparam(b'exp-wanted-sidedata', wanted)
2662 part.addparam(b'exp-wanted-sidedata', wanted)
2663
2663
2664 return bundler
2664 return bundler
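A hedged sketch of driving widen_bundle(); the matchers and node/rev sets are placeholders supplied by the caller, and changegroup version '03' is an assumption:

    def build_widen_bundle(repo, oldmatcher, newmatcher, common, known):
        # hypothetical driver: assemble a widening bundle and stream it;
        # changegroup version '03' is assumed, ellipses mode is off
        bundler = bundle20(repo.ui)  # this module's bundler class
        widen_bundle(
            bundler, repo, oldmatcher, newmatcher, common, known, b'03', False
        )
        return bundler.getchunks()  # iterator of raw bundle2 chunks
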
@@ -1,1024 +1,1024 b''
1 #require serve no-reposimplestore no-chg
1 #require serve no-reposimplestore no-chg
2
2
3 #testcases stream-legacy stream-bundle2-v2 stream-bundle2-v3
3 #testcases stream-legacy stream-bundle2-v2 stream-bundle2-v3
4
4
5 #if stream-legacy
5 #if stream-legacy
6 $ cat << EOF >> $HGRCPATH
6 $ cat << EOF >> $HGRCPATH
7 > [server]
7 > [server]
8 > bundle2.stream = no
8 > bundle2.stream = no
9 > EOF
9 > EOF
10 #endif
10 #endif
11 #if stream-bundle2-v3
11 #if stream-bundle2-v3
12 $ cat << EOF >> $HGRCPATH
12 $ cat << EOF >> $HGRCPATH
13 > [experimental]
13 > [experimental]
14 > stream-v3 = yes
14 > stream-v3 = yes
15 > EOF
15 > EOF
16 #endif
16 #endif
17
17
18 Initialize repository
18 Initialize repository
19
19
20 $ hg init server
20 $ hg init server
21 $ cd server
21 $ cd server
22 $ sh $TESTDIR/testlib/stream_clone_setup.sh
22 $ sh $TESTDIR/testlib/stream_clone_setup.sh
23 adding 00changelog-ab349180a0405010.nd
23 adding 00changelog-ab349180a0405010.nd
24 adding 00changelog.d
24 adding 00changelog.d
25 adding 00changelog.i
25 adding 00changelog.i
26 adding 00changelog.n
26 adding 00changelog.n
27 adding 00manifest.d
27 adding 00manifest.d
28 adding 00manifest.i
28 adding 00manifest.i
29 adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
29 adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
30 adding data/foo.d
30 adding data/foo.d
31 adding data/foo.i
31 adding data/foo.i
32 adding data/foo.n
32 adding data/foo.n
33 adding data/undo.babar
33 adding data/undo.babar
34 adding data/undo.d
34 adding data/undo.d
35 adding data/undo.foo.d
35 adding data/undo.foo.d
36 adding data/undo.foo.i
36 adding data/undo.foo.i
37 adding data/undo.foo.n
37 adding data/undo.foo.n
38 adding data/undo.i
38 adding data/undo.i
39 adding data/undo.n
39 adding data/undo.n
40 adding data/undo.py
40 adding data/undo.py
41 adding foo.d
41 adding foo.d
42 adding foo.i
42 adding foo.i
43 adding foo.n
43 adding foo.n
44 adding meta/foo.d
44 adding meta/foo.d
45 adding meta/foo.i
45 adding meta/foo.i
46 adding meta/foo.n
46 adding meta/foo.n
47 adding meta/undo.babar
47 adding meta/undo.babar
48 adding meta/undo.d
48 adding meta/undo.d
49 adding meta/undo.foo.d
49 adding meta/undo.foo.d
50 adding meta/undo.foo.i
50 adding meta/undo.foo.i
51 adding meta/undo.foo.n
51 adding meta/undo.foo.n
52 adding meta/undo.i
52 adding meta/undo.i
53 adding meta/undo.n
53 adding meta/undo.n
54 adding meta/undo.py
54 adding meta/undo.py
55 adding savanah/foo.d
55 adding savanah/foo.d
56 adding savanah/foo.i
56 adding savanah/foo.i
57 adding savanah/foo.n
57 adding savanah/foo.n
58 adding savanah/undo.babar
58 adding savanah/undo.babar
59 adding savanah/undo.d
59 adding savanah/undo.d
60 adding savanah/undo.foo.d
60 adding savanah/undo.foo.d
61 adding savanah/undo.foo.i
61 adding savanah/undo.foo.i
62 adding savanah/undo.foo.n
62 adding savanah/undo.foo.n
63 adding savanah/undo.i
63 adding savanah/undo.i
64 adding savanah/undo.n
64 adding savanah/undo.n
65 adding savanah/undo.py
65 adding savanah/undo.py
66 adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
66 adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
67 adding store/foo.d
67 adding store/foo.d
68 adding store/foo.i
68 adding store/foo.i
69 adding store/foo.n
69 adding store/foo.n
70 adding store/undo.babar
70 adding store/undo.babar
71 adding store/undo.d
71 adding store/undo.d
72 adding store/undo.foo.d
72 adding store/undo.foo.d
73 adding store/undo.foo.i
73 adding store/undo.foo.i
74 adding store/undo.foo.n
74 adding store/undo.foo.n
75 adding store/undo.i
75 adding store/undo.i
76 adding store/undo.n
76 adding store/undo.n
77 adding store/undo.py
77 adding store/undo.py
78 adding undo.babar
78 adding undo.babar
79 adding undo.d
79 adding undo.d
80 adding undo.foo.d
80 adding undo.foo.d
81 adding undo.foo.i
81 adding undo.foo.i
82 adding undo.foo.n
82 adding undo.foo.n
83 adding undo.i
83 adding undo.i
84 adding undo.n
84 adding undo.n
85 adding undo.py
85 adding undo.py
86
86
87 $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
87 $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
88 $ cat hg.pid > $DAEMON_PIDS
88 $ cat hg.pid > $DAEMON_PIDS
89 $ cd ..
89 $ cd ..
90
90
91 Check local clone
91 Check local clone
92 ==================
92 ==================
93
93
94 The logic is close enough to the uncompressed case.
94 The logic is close enough to the uncompressed case.
95 This is present here to reuse the testing around files with "special" names.
95 This is present here to reuse the testing around files with "special" names.
96
96
97 $ hg clone server local-clone
97 $ hg clone server local-clone
98 updating to branch default
98 updating to branch default
99 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
99 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
100
100
101 Check that the clone went well
101 Check that the clone went well
102
102
103 $ hg verify -R local-clone -q
103 $ hg verify -R local-clone -q
104
104
105 Check uncompressed
105 Check uncompressed
106 ==================
106 ==================
107
107
108 Cannot stream clone when server.uncompressed is set
108 Cannot stream clone when server.uncompressed is set
109
109
110 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
110 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
111 200 Script output follows
111 200 Script output follows
112
112
113 1
113 1
114
114
115 #if stream-legacy
115 #if stream-legacy
116 $ hg debugcapabilities http://localhost:$HGPORT
116 $ hg debugcapabilities http://localhost:$HGPORT
117 Main capabilities:
117 Main capabilities:
118 batch
118 batch
119 branchmap
119 branchmap
120 $USUAL_BUNDLE2_CAPS_SERVER$
120 $USUAL_BUNDLE2_CAPS_SERVER$
121 changegroupsubset
121 changegroupsubset
122 compression=$BUNDLE2_COMPRESSIONS$
122 compression=$BUNDLE2_COMPRESSIONS$
123 getbundle
123 getbundle
124 httpheader=1024
124 httpheader=1024
125 httpmediatype=0.1rx,0.1tx,0.2tx
125 httpmediatype=0.1rx,0.1tx,0.2tx
126 known
126 known
127 lookup
127 lookup
128 pushkey
128 pushkey
129 unbundle=HG10GZ,HG10BZ,HG10UN
129 unbundle=HG10GZ,HG10BZ,HG10UN
130 unbundlehash
130 unbundlehash
131 Bundle2 capabilities:
131 Bundle2 capabilities:
132 HG20
132 HG20
133 bookmarks
133 bookmarks
134 changegroup
134 changegroup
135 01
135 01
136 02
136 02
137 03
137 03
138 checkheads
138 checkheads
139 related
139 related
140 digests
140 digests
141 md5
141 md5
142 sha1
142 sha1
143 sha512
143 sha512
144 error
144 error
145 abort
145 abort
146 unsupportedcontent
146 unsupportedcontent
147 pushraced
147 pushraced
148 pushkey
148 pushkey
149 hgtagsfnodes
149 hgtagsfnodes
150 listkeys
150 listkeys
151 phases
151 phases
152 heads
152 heads
153 pushkey
153 pushkey
154 remote-changegroup
154 remote-changegroup
155 http
155 http
156 https
156 https
157
157
158 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
158 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
159 warning: stream clone requested but server has them disabled
159 warning: stream clone requested but server has them disabled
160 requesting all changes
160 requesting all changes
161 adding changesets
161 adding changesets
162 adding manifests
162 adding manifests
163 adding file changes
163 adding file changes
164 added 3 changesets with 1088 changes to 1088 files
164 added 3 changesets with 1088 changes to 1088 files
165 new changesets 96ee1d7354c4:5223b5e3265f
165 new changesets 96ee1d7354c4:5223b5e3265f
166
166
167 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
167 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
168 200 Script output follows
168 200 Script output follows
169 content-type: application/mercurial-0.2
169 content-type: application/mercurial-0.2
170
170
171
171
172 $ f --size body --hexdump --bytes 100
172 $ f --size body --hexdump --bytes 100
173 body: size=140
173 body: size=140
174 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
174 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
175 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
175 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
176 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
176 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
177 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
177 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
178 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
178 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
179 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
179 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
180 0060: 69 73 20 66 |is f|
180 0060: 69 73 20 66 |is f|
181
181
182 #endif
182 #endif
183 #if stream-bundle2-v2
183 #if stream-bundle2-v2
184 $ hg debugcapabilities http://localhost:$HGPORT
184 $ hg debugcapabilities http://localhost:$HGPORT
185 Main capabilities:
185 Main capabilities:
186 batch
186 batch
187 branchmap
187 branchmap
188 $USUAL_BUNDLE2_CAPS_SERVER$
188 $USUAL_BUNDLE2_CAPS_SERVER$
189 changegroupsubset
189 changegroupsubset
190 compression=$BUNDLE2_COMPRESSIONS$
190 compression=$BUNDLE2_COMPRESSIONS$
191 getbundle
191 getbundle
192 httpheader=1024
192 httpheader=1024
193 httpmediatype=0.1rx,0.1tx,0.2tx
193 httpmediatype=0.1rx,0.1tx,0.2tx
194 known
194 known
195 lookup
195 lookup
196 pushkey
196 pushkey
197 unbundle=HG10GZ,HG10BZ,HG10UN
197 unbundle=HG10GZ,HG10BZ,HG10UN
198 unbundlehash
198 unbundlehash
199 Bundle2 capabilities:
199 Bundle2 capabilities:
200 HG20
200 HG20
201 bookmarks
201 bookmarks
202 changegroup
202 changegroup
203 01
203 01
204 02
204 02
205 03
205 03
206 checkheads
206 checkheads
207 related
207 related
208 digests
208 digests
209 md5
209 md5
210 sha1
210 sha1
211 sha512
211 sha512
212 error
212 error
213 abort
213 abort
214 unsupportedcontent
214 unsupportedcontent
215 pushraced
215 pushraced
216 pushkey
216 pushkey
217 hgtagsfnodes
217 hgtagsfnodes
218 listkeys
218 listkeys
219 phases
219 phases
220 heads
220 heads
221 pushkey
221 pushkey
222 remote-changegroup
222 remote-changegroup
223 http
223 http
224 https
224 https
225
225
226 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
226 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
227 warning: stream clone requested but server has them disabled
227 warning: stream clone requested but server has them disabled
228 requesting all changes
228 requesting all changes
229 adding changesets
229 adding changesets
230 adding manifests
230 adding manifests
231 adding file changes
231 adding file changes
232 added 3 changesets with 1088 changes to 1088 files
232 added 3 changesets with 1088 changes to 1088 files
233 new changesets 96ee1d7354c4:5223b5e3265f
233 new changesets 96ee1d7354c4:5223b5e3265f
234
234
235 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
235 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
236 200 Script output follows
236 200 Script output follows
237 content-type: application/mercurial-0.2
237 content-type: application/mercurial-0.2
238
238
239
239
240 $ f --size body --hexdump --bytes 100
240 $ f --size body --hexdump --bytes 100
241 body: size=140
241 body: size=140
242 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
242 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
243 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
243 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
244 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
244 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
245 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
245 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
246 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
246 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
247 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
247 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
248 0060: 69 73 20 66 |is f|
248 0060: 69 73 20 66 |is f|
249
249
250 #endif
250 #endif
251 #if stream-bundle2-v3
251 #if stream-bundle2-v3
252 $ hg debugcapabilities http://localhost:$HGPORT
252 $ hg debugcapabilities http://localhost:$HGPORT
253 Main capabilities:
253 Main capabilities:
254 batch
254 batch
255 branchmap
255 branchmap
256 $USUAL_BUNDLE2_CAPS_SERVER$
256 $USUAL_BUNDLE2_CAPS_SERVER$
257 changegroupsubset
257 changegroupsubset
258 compression=$BUNDLE2_COMPRESSIONS$
258 compression=$BUNDLE2_COMPRESSIONS$
259 getbundle
259 getbundle
260 httpheader=1024
260 httpheader=1024
261 httpmediatype=0.1rx,0.1tx,0.2tx
261 httpmediatype=0.1rx,0.1tx,0.2tx
262 known
262 known
263 lookup
263 lookup
264 pushkey
264 pushkey
265 unbundle=HG10GZ,HG10BZ,HG10UN
265 unbundle=HG10GZ,HG10BZ,HG10UN
266 unbundlehash
266 unbundlehash
267 Bundle2 capabilities:
267 Bundle2 capabilities:
268 HG20
268 HG20
269 bookmarks
269 bookmarks
270 changegroup
270 changegroup
271 01
271 01
272 02
272 02
273 03
273 03
274 checkheads
274 checkheads
275 related
275 related
276 digests
276 digests
277 md5
277 md5
278 sha1
278 sha1
279 sha512
279 sha512
280 error
280 error
281 abort
281 abort
282 unsupportedcontent
282 unsupportedcontent
283 pushraced
283 pushraced
284 pushkey
284 pushkey
285 hgtagsfnodes
285 hgtagsfnodes
286 listkeys
286 listkeys
287 phases
287 phases
288 heads
288 heads
289 pushkey
289 pushkey
290 remote-changegroup
290 remote-changegroup
291 http
291 http
292 https
292 https
293
293
294 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
294 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
295 warning: stream clone requested but server has them disabled
295 warning: stream clone requested but server has them disabled
296 requesting all changes
296 requesting all changes
297 adding changesets
297 adding changesets
298 adding manifests
298 adding manifests
299 adding file changes
299 adding file changes
300 added 3 changesets with 1088 changes to 1088 files
300 added 3 changesets with 1088 changes to 1088 files
301 new changesets 96ee1d7354c4:5223b5e3265f
301 new changesets 96ee1d7354c4:5223b5e3265f
302
302
303 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
303 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
304 200 Script output follows
304 200 Script output follows
305 content-type: application/mercurial-0.2
305 content-type: application/mercurial-0.2
306
306
307
307
308 $ f --size body --hexdump --bytes 100
308 $ f --size body --hexdump --bytes 100
309 body: size=140
309 body: size=140
310 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
310 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
311 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
311 0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
312 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
312 0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
313 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
313 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
314 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
314 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
315 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
315 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
316 0060: 69 73 20 66 |is f|
316 0060: 69 73 20 66 |is f|
317
317
318 #endif
318 #endif
319
319
320 $ killdaemons.py
320 $ killdaemons.py
321 $ cd server
321 $ cd server
322 $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
322 $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
323 $ cat hg.pid > $DAEMON_PIDS
323 $ cat hg.pid > $DAEMON_PIDS
324 $ cd ..
324 $ cd ..
325
325
326 Basic clone
326 Basic clone
327
327
328 #if stream-legacy
328 #if stream-legacy
329 $ hg clone --stream -U http://localhost:$HGPORT clone1
329 $ hg clone --stream -U http://localhost:$HGPORT clone1
330 streaming all changes
330 streaming all changes
331 1090 files to transfer, 102 KB of data (no-zstd !)
331 1090 files to transfer, 102 KB of data (no-zstd !)
332 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
332 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
333 1090 files to transfer, 98.8 KB of data (zstd !)
333 1090 files to transfer, 98.8 KB of data (zstd !)
334 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
334 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
335 searching for changes
335 searching for changes
336 no changes found
336 no changes found
337 $ cat server/errors.txt
337 $ cat server/errors.txt
338 #endif
338 #endif
339 #if stream-bundle2-v2
339 #if stream-bundle2-v2
340 $ hg clone --stream -U http://localhost:$HGPORT clone1
340 $ hg clone --stream -U http://localhost:$HGPORT clone1
341 streaming all changes
341 streaming all changes
342 1093 files to transfer, 102 KB of data (no-zstd !)
342 1093 files to transfer, 102 KB of data (no-zstd !)
343 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
343 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
344 1093 files to transfer, 98.9 KB of data (zstd !)
344 1093 files to transfer, 98.9 KB of data (zstd !)
345 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
345 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
346
346
347 $ ls -1 clone1/.hg/cache
347 $ ls -1 clone1/.hg/cache
348 branch2-base
348 branch2-base
349 branch2-immutable
349 branch2-immutable
350 branch2-served
350 branch2-served
351 branch2-served.hidden
351 branch2-served.hidden
352 branch2-visible
352 branch2-visible
353 branch2-visible-hidden
353 branch2-visible-hidden
354 rbc-names-v1
354 rbc-names-v1
355 rbc-revs-v1
355 rbc-revs-v1
356 tags2
356 tags2
357 tags2-served
357 tags2-served
358 $ cat server/errors.txt
358 $ cat server/errors.txt
359 #endif
359 #endif
360 #if stream-bundle2-v3
360 #if stream-bundle2-v3
361 $ hg clone --stream -U http://localhost:$HGPORT clone1
361 $ hg clone --stream -U http://localhost:$HGPORT clone1
362 streaming all changes
362 streaming all changes
363 1093 files to transfer, 102 KB of data (no-zstd !)
363 1093 files to transfer, 102 KB of data (no-zstd !)
364 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
364 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
365 1093 files to transfer, 98.9 KB of data (zstd !)
365 1093 files to transfer, 98.9 KB of data (zstd !)
366 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
366 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
367
367
368 $ ls -1 clone1/.hg/cache
368 $ ls -1 clone1/.hg/cache
369 branch2-base
369 branch2-base
370 branch2-immutable
370 branch2-immutable
371 branch2-served
371 branch2-served
372 branch2-served.hidden
372 branch2-served.hidden
373 branch2-visible
373 branch2-visible
374 branch2-visible-hidden
374 branch2-visible-hidden
375 rbc-names-v1
375 rbc-names-v1
376 rbc-revs-v1
376 rbc-revs-v1
377 tags2
377 tags2
378 tags2-served
378 tags2-served
379 $ cat server/errors.txt
379 $ cat server/errors.txt
380 #endif
380 #endif
381
381
382 getbundle requests with stream=1 are uncompressed
382 getbundle requests with stream=1 are uncompressed
383
383
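The hexdumps below start with the application/mercurial-0.2 framing: a one-byte length, the compression engine name, the HG20 magic, and a big-endian 32-bit stream-parameter size, after which the first part header begins. A hedged sketch of peeling off that prefix (the helper is illustrative; the layout follows the framing documented at the top of bundle2.py):

    import struct

    def peek_stream_header(body):
        # hypothetical reader for the dumps below: <len><compression name>
        # precedes the bundle2 magic and the 4-byte parameter-block size
        n = body[0]
        comp = body[1:1 + n]                                # e.g. b'none'
        magic = body[1 + n:5 + n]                           # b'HG20'
        (paramssize,) = struct.unpack('>I', body[5 + n:9 + n])
        return comp, magic, paramssize
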
384 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
384 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
385 200 Script output follows
385 200 Script output follows
386 content-type: application/mercurial-0.2
386 content-type: application/mercurial-0.2
387
387
388
388
389 #if no-zstd no-rust
389 #if no-zstd no-rust
390 $ f --size --hex --bytes 256 body
390 $ f --size --hex --bytes 256 body
391 body: size=119123
391 body: size=119123
392 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
392 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
393 0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
393 0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
394 0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
394 0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
395 0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
395 0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
396 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
396 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
397 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
397 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
398 0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
398 0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
399 0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
399 0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
400 0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
400 0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
401 0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
401 0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
402 00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
402 00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
403 00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
403 00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
404 00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
404 00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
405 00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
405 00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
406 00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
406 00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
407 00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
407 00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
408 #endif
408 #endif
409 #if zstd no-rust
409 #if zstd no-rust
410 $ f --size --hex --bytes 256 body
410 $ f --size --hex --bytes 256 body
411 body: size=116310 (no-bigendian !)
411 body: size=116310 (no-bigendian !)
412 body: size=116305 (bigendian !)
412 body: size=116305 (bigendian !)
413 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
413 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
414 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
414 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
415 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
415 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
416 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
416 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
417 0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
417 0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
418 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
418 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
419 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
419 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
420 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
420 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
421 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
421 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
422 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
422 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
423 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
423 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
424 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
424 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
425 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
425 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
426 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
426 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
427 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
427 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
428 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
428 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
429 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
429 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
430 #endif
430 #endif
431 #if zstd rust no-dirstate-v2
431 #if zstd rust no-dirstate-v2
432 $ f --size --hex --bytes 256 body
432 $ f --size --hex --bytes 256 body
433 body: size=116310
433 body: size=116310
434 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
434 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
435 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
435 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
436 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
436 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
437 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109|
437 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109|
438 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
438 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
439 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
439 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
440 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
440 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
441 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
441 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
442 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
442 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
443 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
443 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
444 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
444 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
445 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
445 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
446 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
446 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
447 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
447 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
448 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
448 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
449 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
449 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
450 #endif
450 #endif
451 #if zstd dirstate-v2
451 #if zstd dirstate-v2
452 $ f --size --hex --bytes 256 body
452 $ f --size --hex --bytes 256 body
453 body: size=109549
453 body: size=109549
454 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
454 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
455 0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
455 0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
456 0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
456 0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
457 0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
457 0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
458 0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
458 0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
459 0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
459 0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
460 0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
460 0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
461 0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
461 0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
462 0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
462 0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
463 0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
463 0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
464 00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
464 00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
465 00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
465 00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
466 00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
466 00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
467 00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
467 00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
468 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
468 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
469 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
469 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
470 #endif
470 #endif
471
471
472 --uncompressed is an alias to --stream
472 --uncompressed is an alias to --stream
473
473
474 #if stream-legacy
474 #if stream-legacy
475 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
475 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
476 streaming all changes
476 streaming all changes
477 1090 files to transfer, 102 KB of data (no-zstd !)
477 1090 files to transfer, 102 KB of data (no-zstd !)
478 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
478 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
479 1090 files to transfer, 98.8 KB of data (zstd !)
479 1090 files to transfer, 98.8 KB of data (zstd !)
480 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
480 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
481 searching for changes
481 searching for changes
482 no changes found
482 no changes found
483 #endif
483 #endif
484 #if stream-bundle2-v2
484 #if stream-bundle2-v2
485 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
485 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
486 streaming all changes
486 streaming all changes
487 1093 files to transfer, 102 KB of data (no-zstd !)
487 1093 files to transfer, 102 KB of data (no-zstd !)
488 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
488 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
489 1093 files to transfer, 98.9 KB of data (zstd !)
489 1093 files to transfer, 98.9 KB of data (zstd !)
490 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
490 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
491 #endif
491 #endif
492 #if stream-bundle2-v3
492 #if stream-bundle2-v3
493 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
493 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
494 streaming all changes
494 streaming all changes
495 1093 files to transfer, 102 KB of data (no-zstd !)
495 1093 files to transfer, 102 KB of data (no-zstd !)
496 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
496 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
497 1093 files to transfer, 98.9 KB of data (zstd !)
497 1093 files to transfer, 98.9 KB of data (zstd !)
498 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
498 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
499 #endif
499 #endif
500
500
501 Clone with background file closing enabled
501 Clone with background file closing enabled
502
502
503 #if stream-legacy
503 #if stream-legacy
504 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
504 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
505 using http://localhost:$HGPORT/
505 using http://localhost:$HGPORT/
506 sending capabilities command
506 sending capabilities command
507 sending branchmap command
507 sending branchmap command
508 streaming all changes
508 streaming all changes
509 sending stream_out command
509 sending stream_out command
510 1090 files to transfer, 102 KB of data (no-zstd !)
510 1090 files to transfer, 102 KB of data (no-zstd !)
511 1090 files to transfer, 98.8 KB of data (zstd !)
511 1090 files to transfer, 98.8 KB of data (zstd !)
512 starting 4 threads for background file closing
512 starting 4 threads for background file closing
513 updating the branch cache
513 updating the branch cache
514 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
514 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
515 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
515 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
516 query 1; heads
516 query 1; heads
517 sending batch command
517 sending batch command
518 searching for changes
518 searching for changes
519 all remote heads known locally
519 all remote heads known locally
520 no changes found
520 no changes found
521 sending getbundle command
521 sending getbundle command
522 bundle2-input-bundle: with-transaction
522 bundle2-input-bundle: with-transaction
523 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
523 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
524 bundle2-input-part: "phase-heads" supported
524 bundle2-input-part: "phase-heads" supported
525 bundle2-input-part: total payload size 24
525 bundle2-input-part: total payload size 24
526 bundle2-input-bundle: 2 parts total
526 bundle2-input-bundle: 2 parts total
527 checking for updated bookmarks
527 checking for updated bookmarks
528 updating the branch cache
528 updating the branch cache
529 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
529 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
530 #endif
530 #endif
531 #if stream-bundle2-v2
531 #if stream-bundle2-v2
532 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
532 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
533 using http://localhost:$HGPORT/
533 using http://localhost:$HGPORT/
534 sending capabilities command
534 sending capabilities command
535 query 1; heads
535 query 1; heads
536 sending batch command
536 sending batch command
537 streaming all changes
537 streaming all changes
538 sending getbundle command
538 sending getbundle command
539 bundle2-input-bundle: with-transaction
539 bundle2-input-bundle: with-transaction
540 bundle2-input-part: "stream2" (params: 3 mandatory) supported
540 bundle2-input-part: "stream2" (params: 3 mandatory) supported
541 applying stream bundle
541 applying stream bundle
542 1093 files to transfer, 102 KB of data (no-zstd !)
542 1093 files to transfer, 102 KB of data (no-zstd !)
543 1093 files to transfer, 98.9 KB of data (zstd !)
543 1093 files to transfer, 98.9 KB of data (zstd !)
544 starting 4 threads for background file closing
544 starting 4 threads for background file closing
545 starting 4 threads for background file closing
545 starting 4 threads for background file closing
546 updating the branch cache
546 updating the branch cache
547 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
547 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
548 bundle2-input-part: total payload size 118984 (no-zstd !)
548 bundle2-input-part: total payload size 118984 (no-zstd !)
549 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
549 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
550 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
550 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
551 bundle2-input-part: total payload size 116140 (zstd bigendian !)
551 bundle2-input-part: total payload size 116140 (zstd bigendian !)
552 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
552 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
553 bundle2-input-bundle: 2 parts total
553 bundle2-input-bundle: 2 parts total
554 checking for updated bookmarks
554 checking for updated bookmarks
555 updating the branch cache
555 updating the branch cache
556 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
556 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
557 #endif
557 #endif
558 #if stream-bundle2-v3
558 #if stream-bundle2-v3
559 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
559 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
560 using http://localhost:$HGPORT/
560 using http://localhost:$HGPORT/
561 sending capabilities command
561 sending capabilities command
562 query 1; heads
562 query 1; heads
563 sending batch command
563 sending batch command
564 streaming all changes
564 streaming all changes
565 sending getbundle command
565 sending getbundle command
566 bundle2-input-bundle: with-transaction
566 bundle2-input-bundle: with-transaction
567 bundle2-input-part: "stream3" (params: 3 mandatory) supported
567 bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported
568 applying stream bundle
568 applying stream bundle
569 1093 files to transfer, 102 KB of data (no-zstd !)
569 1093 files to transfer, 102 KB of data (no-zstd !)
570 1093 files to transfer, 98.9 KB of data (zstd !)
570 1093 files to transfer, 98.9 KB of data (zstd !)
571 starting 4 threads for background file closing
571 starting 4 threads for background file closing
572 starting 4 threads for background file closing
572 starting 4 threads for background file closing
573 updating the branch cache
573 updating the branch cache
574 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
574 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
575 bundle2-input-part: total payload size 118984 (no-zstd !)
575 bundle2-input-part: total payload size 118984 (no-zstd !)
576 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
576 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
577 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
577 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
578 bundle2-input-part: total payload size 116140 (zstd bigendian !)
578 bundle2-input-part: total payload size 116140 (zstd bigendian !)
579 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
579 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
580 bundle2-input-bundle: 2 parts total
580 bundle2-input-bundle: 2 parts total
581 checking for updated bookmarks
581 checking for updated bookmarks
582 updating the branch cache
582 updating the branch cache
583 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
583 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
584 #endif
584 #endif
585
585
586 Cannot stream clone when there are secret changesets
586 Cannot stream clone when there are secret changesets
587
587
588 $ hg -R server phase --force --secret -r tip
588 $ hg -R server phase --force --secret -r tip
589 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
589 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
590 warning: stream clone requested but server has them disabled
590 warning: stream clone requested but server has them disabled
591 requesting all changes
591 requesting all changes
592 adding changesets
592 adding changesets
593 adding manifests
593 adding manifests
594 adding file changes
594 adding file changes
595 added 2 changesets with 1025 changes to 1025 files
595 added 2 changesets with 1025 changes to 1025 files
596 new changesets 96ee1d7354c4:c17445101a72
596 new changesets 96ee1d7354c4:c17445101a72
597
597
598 $ killdaemons.py
598 $ killdaemons.py
599
599
600 Streaming of secrets can be overridden by server config
600 Streaming of secrets can be overridden by server config
601
601
602 $ cd server
602 $ cd server
603 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
603 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
604 $ cat hg.pid > $DAEMON_PIDS
604 $ cat hg.pid > $DAEMON_PIDS
605 $ cd ..
605 $ cd ..
606
606
607 #if stream-legacy
607 #if stream-legacy
608 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
608 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
609 streaming all changes
609 streaming all changes
610 1090 files to transfer, 102 KB of data (no-zstd !)
610 1090 files to transfer, 102 KB of data (no-zstd !)
611 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
611 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
612 1090 files to transfer, 98.8 KB of data (zstd !)
612 1090 files to transfer, 98.8 KB of data (zstd !)
613 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
613 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
614 searching for changes
614 searching for changes
615 no changes found
615 no changes found
616 #endif
616 #endif
617 #if stream-bundle2-v2
617 #if stream-bundle2-v2
618 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
618 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
619 streaming all changes
619 streaming all changes
620 1093 files to transfer, 102 KB of data (no-zstd !)
620 1093 files to transfer, 102 KB of data (no-zstd !)
621 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
621 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
622 1093 files to transfer, 98.9 KB of data (zstd !)
622 1093 files to transfer, 98.9 KB of data (zstd !)
623 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
623 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
624 #endif
624 #endif
625 #if stream-bundle2-v3
625 #if stream-bundle2-v3
626 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
626 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
627 streaming all changes
627 streaming all changes
628 1093 files to transfer, 102 KB of data (no-zstd !)
628 1093 files to transfer, 102 KB of data (no-zstd !)
629 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
629 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
630 1093 files to transfer, 98.9 KB of data (zstd !)
630 1093 files to transfer, 98.9 KB of data (zstd !)
631 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
631 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
632 #endif
632 #endif
633
633
634 $ killdaemons.py
634 $ killdaemons.py
635
635
636 Verify interaction between preferuncompressed and secret presence
636 Verify interaction between preferuncompressed and secret presence
637
637
638 $ cd server
638 $ cd server
639 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
639 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
640 $ cat hg.pid > $DAEMON_PIDS
640 $ cat hg.pid > $DAEMON_PIDS
641 $ cd ..
641 $ cd ..
642
642
643 $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
643 $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
644 requesting all changes
644 requesting all changes
645 adding changesets
645 adding changesets
646 adding manifests
646 adding manifests
647 adding file changes
647 adding file changes
648 added 2 changesets with 1025 changes to 1025 files
648 added 2 changesets with 1025 changes to 1025 files
649 new changesets 96ee1d7354c4:c17445101a72
649 new changesets 96ee1d7354c4:c17445101a72
650
650
651 $ killdaemons.py
651 $ killdaemons.py
652
652
653 Clone not allowed when full bundles disabled and can't serve secrets
653 Clone not allowed when full bundles disabled and can't serve secrets
654
654
655 $ cd server
655 $ cd server
656 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
656 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
657 $ cat hg.pid > $DAEMON_PIDS
657 $ cat hg.pid > $DAEMON_PIDS
658 $ cd ..
658 $ cd ..
659
659
660 $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
660 $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
661 warning: stream clone requested but server has them disabled
661 warning: stream clone requested but server has them disabled
662 requesting all changes
662 requesting all changes
663 remote: abort: server has pull-based clones disabled
663 remote: abort: server has pull-based clones disabled
664 abort: pull failed on remote
664 abort: pull failed on remote
665 (remove --pull if specified or upgrade Mercurial)
665 (remove --pull if specified or upgrade Mercurial)
666 [100]
666 [100]
667
667
668 Local stream clone with secrets involved
668 Local stream clone with secrets involved
669 (This is just a test of behavior: if you have access to the repo's files,
669 (This is just a test of behavior: if you have access to the repo's files,
670 there is no security boundary, so it isn't important to prevent a clone here.)
670 there is no security boundary, so it isn't important to prevent a clone here.)
671
671
672 $ hg clone -U --stream server local-secret
672 $ hg clone -U --stream server local-secret
673 warning: stream clone requested but server has them disabled
673 warning: stream clone requested but server has them disabled
674 requesting all changes
674 requesting all changes
675 adding changesets
675 adding changesets
676 adding manifests
676 adding manifests
677 adding file changes
677 adding file changes
678 added 2 changesets with 1025 changes to 1025 files
678 added 2 changesets with 1025 changes to 1025 files
679 new changesets 96ee1d7354c4:c17445101a72
679 new changesets 96ee1d7354c4:c17445101a72
680
680
681 Stream clone while repo is changing:
681 Stream clone while repo is changing:
682
682
683 $ mkdir changing
683 $ mkdir changing
684 $ cd changing
684 $ cd changing
685
685
686 extension for delaying the server process so we can reliably modify the repo
686 extension for delaying the server process so we can reliably modify the repo
687 while cloning
687 while cloning
688
688
$ cat > stream_steps.py <<EOF
> import os
> import sys
> from mercurial import (
>     encoding,
>     extensions,
>     streamclone,
>     testing,
> )
> WALKED_FILE_1 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_1']
> WALKED_FILE_2 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_2']
>
> def _test_sync_point_walk_1(orig, repo):
>     testing.write_file(WALKED_FILE_1)
>
> def _test_sync_point_walk_2(orig, repo):
>     assert repo._currentlock(repo._lockref) is None
>     testing.wait_file(WALKED_FILE_2)
>
> extensions.wrapfunction(
>     streamclone,
>     '_test_sync_point_walk_1',
>     _test_sync_point_walk_1
> )
> extensions.wrapfunction(
>     streamclone,
>     '_test_sync_point_walk_2',
>     _test_sync_point_walk_2
> )
> EOF
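
(For readers unfamiliar with extensions.wrapfunction: it replaces an attribute
on a module or object with a wrapper that receives the original callable as
its first argument. The _test_sync_point_walk_* hooks in streamclone exist
precisely so tests can wrap them, which is why the wrappers above never call
orig. A minimal sketch of the general pattern, using hypothetical names:)

from mercurial import extensions
import some_module  # hypothetical module owning the function to wrap

def my_wrapper(orig, *args, **kwargs):
    # code running before the wrapped function
    result = orig(*args, **kwargs)  # call through to the original
    # code running after the wrapped function
    return result

# After this, some_module.some_function(...) goes through my_wrapper.
extensions.wrapfunction(some_module, 'some_function', my_wrapper)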

prepare a repo with a small and a big file to cover both code paths in
emitrevlogdata (the two paths are sketched below)

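(Roughly, the v1 stream generator emits a small file with a single read and
streams a big one in fixed-size chunks. A simplified sketch of the shape of
that logic, assuming the 64 KiB cutoff and the util.filechunkiter helper;
svfs, name and size come from the surrounding per-file loop:)

# inside the per-file loop of the stream generator (simplified)
with svfs(name, b'rb', auditpath=False) as fp:
    if size <= 65536:
        yield fp.read(size)  # small file: emit in one read
    else:
        for chunk in util.filechunkiter(fp, limit=size):
            yield chunk  # big file: emit chunk by chunk
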
$ hg init repo
$ touch repo/f1
$ $TESTDIR/seq.py 50000 > repo/f2
$ hg -R repo ci -Aqm "0"
$ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1"
$ export HG_TEST_STREAM_WALKED_FILE_1
$ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2"
$ export HG_TEST_STREAM_WALKED_FILE_2
$ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3"
$ export HG_TEST_STREAM_WALKED_FILE_3
# $ cat << EOF >> $HGRCPATH
# > [hooks]
# > pre-clone=rm -f "$TESTTMP/sync_file_walked_*"
# > EOF
$ hg serve -R repo -p $HGPORT1 -d --error errors.log --pid-file=hg.pid --config extensions.stream_steps="$RUNTESTDIR/testlib/ext-stream-clone-steps.py"
$ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stat'ing the files (done while holding
the write lock) and actually serving the file content

$ (hg clone -q --stream -U http://localhost:$HGPORT1 clone; touch "$HG_TEST_STREAM_WALKED_FILE_3") &
$ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1
$ echo >> repo/f1
$ echo >> repo/f2
$ hg -R repo ci -m "1" --config ui.timeout.warn=-1
$ touch $HG_TEST_STREAM_WALKED_FILE_2
$ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3
$ hg -R clone id
000000000000
$ cat errors.log
$ cd ..
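
(The synchronization above is a plain file-based barrier: one process signals
by creating a file, the other blocks by polling for its existence. A rough
sketch of what helpers like testing.write_file / testing.wait_file and
testlib/wait-on-file do, with an assumed polling interval and timeout:)

import os
import time

def write_file(path):
    # signal: create the file the other process is waiting on
    with open(path, 'wb'):
        pass

def wait_file(path, timeout=10):
    # barrier: poll until the file appears, or give up
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() >= deadline:
            raise RuntimeError('timed out waiting for %r' % path)
        time.sleep(0.01)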

Stream repository with bookmarks
--------------------------------

(revert introduction of secret changeset)

$ hg -R server phase --draft 'secret()'

add a bookmark

$ hg -R server bookmark -r tip some-bookmark

clone it

#if stream-legacy
$ hg clone --stream http://localhost:$HGPORT with-bookmarks
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2-v2
$ hg clone --stream http://localhost:$HGPORT with-bookmarks
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2-v3
$ hg clone --stream http://localhost:$HGPORT with-bookmarks
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
$ hg verify -R with-bookmarks -q
$ hg -R with-bookmarks bookmarks
some-bookmark 2:5223b5e3265f

Stream repository with phases
-----------------------------

Clone as publishing

$ hg -R server phase -r 'all()'
0: draft
1: draft
2: draft

#if stream-legacy
$ hg clone --stream http://localhost:$HGPORT phase-publish
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2-v2
$ hg clone --stream http://localhost:$HGPORT phase-publish
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2-v3
$ hg clone --stream http://localhost:$HGPORT phase-publish
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
$ hg verify -R phase-publish -q
$ hg -R phase-publish phase -r 'all()'
0: public
1: public
2: public

Clone as non-publishing

$ cat << EOF >> server/.hg/hgrc
> [phases]
> publish = False
> EOF
$ killdaemons.py
$ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS

#if stream-legacy

With v1 of the stream protocol, changesets are always cloned as public. This
makes stream v1 unsuitable for non-publishing repositories.

$ hg clone --stream http://localhost:$HGPORT phase-no-publish
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R phase-no-publish phase -r 'all()'
0: public
1: public
2: public
#endif
#if stream-bundle2-v2
$ hg clone --stream http://localhost:$HGPORT phase-no-publish
streaming all changes
1097 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1097 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R phase-no-publish phase -r 'all()'
0: draft
1: draft
2: draft
#endif
#if stream-bundle2-v3
$ hg clone --stream http://localhost:$HGPORT phase-no-publish
streaming all changes
1097 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1097 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R phase-no-publish phase -r 'all()'
0: draft
1: draft
2: draft
#endif
$ hg verify -R phase-no-publish -q

$ killdaemons.py

#if stream-legacy

With v1 of the stream protocol, changesets are always cloned as public, and
there is no obsolescence marker exchange in stream v1.

#endif
#if stream-bundle2-v2

Stream repository with obsolescence
-----------------------------------

Clone non-publishing with obsolescence

$ cat >> $HGRCPATH << EOF
> [experimental]
> evolution=all
> EOF

$ cd server
$ echo foo > foo
$ hg -q commit -m 'about to be pruned'
$ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
1 new obsolescence markers
obsoleted 1 changesets
$ hg up null -q
$ hg log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

$ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
streaming all changes
1098 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1098 files to transfer, 99.5 KB of data (zstd !)
transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !)
$ hg -R with-obsolescence log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg debugobsolete -R with-obsolescence
8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
$ hg verify -R with-obsolescence -q

$ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
streaming all changes
remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
abort: pull failed on remote
[100]

$ killdaemons.py

#endif
#if stream-bundle2-v3

Stream repository with obsolescence
-----------------------------------

Clone non-publishing with obsolescence

$ cat >> $HGRCPATH << EOF
> [experimental]
> evolution=all
> EOF

$ cd server
$ echo foo > foo
$ hg -q commit -m 'about to be pruned'
$ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
1 new obsolescence markers
obsoleted 1 changesets
$ hg up null -q
$ hg log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

$ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
streaming all changes
1098 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1098 files to transfer, 99.5 KB of data (zstd !)
transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !)
$ hg -R with-obsolescence log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg debugobsolete -R with-obsolescence
8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
$ hg verify -R with-obsolescence -q

$ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
streaming all changes
remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
abort: pull failed on remote
[100]

$ killdaemons.py

#endif

Cloning a repo with no requirements doesn't produce an obscure error

$ mkdir -p empty-repo/.hg
$ hg clone -q --stream ssh://user@dummy/empty-repo empty-repo2
$ hg --cwd empty-repo2 verify -q