streamclone: avoid some obscure error in a corner case...
Valentin Gatien-Baron
r49847:d9ed7c5e stable
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32bits integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count:  1 byte, number of advisory parameters

    :param-sizes:

        N pairs of bytes, where N is the total number of parameters. Each
        pair contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

        A blob of bytes from which each parameter key and value can be
        retrieved using the list of size pairs stored in the previous
        field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


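# Editor's sketch (not part of the original module): building a stream-level
# parameter block by hand, following the wire format described in the module
# docstring: a big-endian int32 size followed by a space-separated list of
# urlquoted `name` or `name=value` entries. The parameter values used here
# are hypothetical.
def _demostreamparamblock():
    params = [(b'Compression', b'BZ'), (b'somefeature', None)]
    entries = []
    for name, value in params:
        entry = urlreq.quote(name)
        if value is not None:
            entry = b'%s=%s' % (entry, urlreq.quote(value))
        entries.append(entry)
    blob = b' '.join(entries)
    # size prefix first, then the blob itself
    return _pack(_fstreamparamsize, len(blob)) + blob

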
def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)


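# Editor's sketch: what the dynamic format built above looks like in use. For
# two parameters, _makefpartparamsizes(2) returns b'>BBBB', i.e. one unsigned
# (key-size, value-size) byte pair per parameter, matching the <param-sizes>
# field documented in the module docstring.
def _demoparamsizes():
    fmt = _makefpartparamsizes(2)
    assert fmt == b'>BBBB'
    # a part with params (b'key', b'value') and (b'flag', b'') would carry
    # the size pairs (3, 5) and (4, 0)
    return _unpack(fmt, b'\x03\x05\x04\x00')  # -> (3, 5, 4, 0)

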
parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


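# Editor's sketch (hypothetical part type, not registered by Mercurial): how
# the decorator above is used. The lowercased part type becomes the key in
# parthandlermapping; `params` declares the mandatory parameter names the
# handler knows how to process.
@parthandler(b'x-demo:noop', (b'reason',))
def _demonoophandler(op, part):
    """record why this illustrative part was sent, and do nothing else"""
    op.records.add(b'x-demo:noop', {b'reason': part.params.get(b'reason')})

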
class unbundlerecords(object):
    """keep a record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__


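# Editor's sketch: typical use of unbundlerecords. Category keys are
# arbitrary bytes; per-category access returns a tuple, while iterating the
# object itself preserves chronological order across categories.
def _demorecords():
    records = unbundlerecords()
    records.add(b'changegroup', {b'return': 1})
    records.add(b'output', b'remote: ok\n')
    assert records[b'changegroup'] == ({b'return': 1},)
    return list(records)  # chronological (category, entry) pairs

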
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True, source=b''):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
        # and should not gracefully cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(repo, unbundler, transactiongetter=None, op=None, source=b''):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part, then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


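# Editor's sketch (hedged): the usual way the pieces above fit together when
# applying a bundle2 stream. `repo`, `fp` and `tr` are assumed to be provided
# by the caller; `fp` is any file-like object positioned at the magic string.
def _demoprocess(repo, fp, tr):
    unbundler = getunbundler(repo.ui, fp)
    return processbundle(repo, unbundler, transactiongetter=lambda: tr)

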
def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)


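# Editor's sketch: encodecaps/decodecaps round-trip. Keys are sorted on
# encode, and values always come back as a (possibly empty) list.
def _democapsroundtrip():
    caps = {b'HG20': [], b'bookmarks': [b'heads', b'pushkey']}
    blob = encodecaps(caps)  # -> b'HG20\nbookmarks=heads,pushkey'
    assert decodecaps(blob) == caps
    return blob

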
bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged


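# Editor's sketch (hedged): assembling a small outgoing container with the
# class above. `ui` is assumed to be a Mercurial ui object; the part type and
# payload are illustrative.
def _demomakebundle(ui):
    bundler = bundle20(ui)
    bundler.newpart(b'output', data=b'hello', mandatory=False)
    # concatenating the chunks yields the complete HG20 byte stream
    return b''.join(bundler.getchunks())

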
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


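# Editor's sketch: unpackermixin used in isolation on an in-memory stream,
# reading an int32 length prefix and then exactly that many bytes.
def _demounpack():
    import io

    reader = unpackermixin(io.BytesIO(_pack(b'>i', 3) + b'abc'))
    (size,) = reader._unpack(b'>i')
    return reader._readexact(size)  # -> b'abc'

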
def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler


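# Editor's sketch: listing the part types of an incoming bundle2 stream. Each
# part must be fully consumed before the next header can be read; iterparts()
# already consumes the part after yielding it.
def _demolistparts(ui, fp):
    unbundler = getunbundler(ui, fp)
    return [part.type for part in unbundler.iterparts()]

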
801 class unbundle20(unpackermixin):
801 class unbundle20(unpackermixin):
802 """interpret a bundle2 stream
802 """interpret a bundle2 stream
803
803
804 This class is fed with a binary stream and yields parts through its
804 This class is fed with a binary stream and yields parts through its
805 `iterparts` methods."""
805 `iterparts` methods."""
806
806
807 _magicstring = b'HG20'
807 _magicstring = b'HG20'
808
808
809 def __init__(self, ui, fp):
809 def __init__(self, ui, fp):
810 """If header is specified, we do not read it out of the stream."""
810 """If header is specified, we do not read it out of the stream."""
811 self.ui = ui
811 self.ui = ui
812 self._compengine = util.compengines.forbundletype(b'UN')
812 self._compengine = util.compengines.forbundletype(b'UN')
813 self._compressed = None
813 self._compressed = None
814 super(unbundle20, self).__init__(fp)
814 super(unbundle20, self).__init__(fp)
815
815
816 @util.propertycache
816 @util.propertycache
817 def params(self):
817 def params(self):
818 """dictionary of stream level parameters"""
818 """dictionary of stream level parameters"""
819 indebug(self.ui, b'reading bundle2 stream parameters')
819 indebug(self.ui, b'reading bundle2 stream parameters')
820 params = {}
820 params = {}
821 paramssize = self._unpack(_fstreamparamsize)[0]
821 paramssize = self._unpack(_fstreamparamsize)[0]
822 if paramssize < 0:
822 if paramssize < 0:
823 raise error.BundleValueError(
823 raise error.BundleValueError(
824 b'negative bundle param size: %i' % paramssize
824 b'negative bundle param size: %i' % paramssize
825 )
825 )
826 if paramssize:
826 if paramssize:
827 params = self._readexact(paramssize)
827 params = self._readexact(paramssize)
828 params = self._processallparams(params)
828 params = self._processallparams(params)
829 return params
829 return params
830
830
831 def _processallparams(self, paramsblock):
831 def _processallparams(self, paramsblock):
832 """ """
832 """ """
833 params = util.sortdict()
833 params = util.sortdict()
834 for p in paramsblock.split(b' '):
834 for p in paramsblock.split(b' '):
835 p = p.split(b'=', 1)
835 p = p.split(b'=', 1)
836 p = [urlreq.unquote(i) for i in p]
836 p = [urlreq.unquote(i) for i in p]
837 if len(p) < 2:
837 if len(p) < 2:
838 p.append(None)
838 p.append(None)
839 self._processparam(*p)
839 self._processparam(*p)
840 params[p[0]] = p[1]
840 params[p[0]] = p[1]
841 return params
841 return params
842
842
843 def _processparam(self, name, value):
843 def _processparam(self, name, value):
844 """process a parameter, applying its effect if needed
844 """process a parameter, applying its effect if needed
845
845
846 Parameter starting with a lower case letter are advisory and will be
846 Parameter starting with a lower case letter are advisory and will be
847 ignored when unknown. Those starting with an upper case letter are
847 ignored when unknown. Those starting with an upper case letter are
848 mandatory and will this function will raise a KeyError when unknown.
848 mandatory and will this function will raise a KeyError when unknown.
849
849
850 Note: no option are currently supported. Any input will be either
850 Note: no option are currently supported. Any input will be either
851 ignored or failing.
851 ignored or failing.
852 """
852 """
853 if not name:
853 if not name:
854 raise ValueError('empty parameter name')
854 raise ValueError('empty parameter name')
855 if name[0:1] not in pycompat.bytestr(
855 if name[0:1] not in pycompat.bytestr(
856 string.ascii_letters # pytype: disable=wrong-arg-types
856 string.ascii_letters # pytype: disable=wrong-arg-types
857 ):
857 ):
858 raise ValueError('non letter first character: %s' % name)
858 raise ValueError('non letter first character: %s' % name)
859 try:
859 try:
860 handler = b2streamparamsmap[name.lower()]
860 handler = b2streamparamsmap[name.lower()]
861 except KeyError:
861 except KeyError:
862 if name[0:1].islower():
862 if name[0:1].islower():
863 indebug(self.ui, b"ignoring unknown parameter %s" % name)
863 indebug(self.ui, b"ignoring unknown parameter %s" % name)
864 else:
864 else:
865 raise error.BundleUnknownFeatureError(params=(name,))
865 raise error.BundleUnknownFeatureError(params=(name,))
866 else:
866 else:
867 handler(self, name, value)
867 handler(self, name, value)
868
868
    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends, relying on the bundle
        being interpreted to know its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            # The payload itself is decompressed below, so drop
            # the compression parameter passed down to compensate.
            outparams = []
            for p in params.split(b' '):
                k, v = p.split(b'=', 1)
                if k.lower() != b'compression':
                    outparams.append(p)
            outparams = b' '.join(outparams)
            yield _pack(_fstreamparamsize, len(outparams))
            yield outparams
        else:
            yield _pack(_fstreamparamsize, paramssize)
        # From there, the payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError(b'negative chunk size: %i' % size)
            yield self._readexact(size)

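    # For reference, each frame forwarded above is a 32-bit big-endian
    # signed size followed by that many payload bytes. A standalone
    # sketch of the copy loop, assuming the '>i' layout used by this
    # module's frame headers (``fin``/``fout`` are hypothetical file
    # objects, not part of the bundle2 API):
    #
    #     header = struct.Struct('>i')  # same layout as _fpartheadersize
    #     emptycount = 0
    #     while emptycount < 2:
    #         data = fin.read(header.size)
    #         (size,) = header.unpack(data)
    #         fout.write(data)
    #         if size == 0:        # closes a part; two in a row close
    #             emptycount += 1  # the whole stream
    #             continue
    #         emptycount = 0
    #         if size < 0:  # interruption marker; its frames follow inline
    #             continue
    #         fout.write(fin.read(size))
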
    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, b'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure the part is fully consumed so we can start reading the
            # next part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, b'end of bundle2 stream')

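    # Typical consumption of the iterator above (a sketch; ``fp`` is
    # assumed to be an open bundle2 stream and ``getunbundler`` is
    # defined earlier in this module):
    #
    #     unbundler = getunbundler(ui, fp)
    #     for part in unbundler.iterparts():
    #         indebug(ui, b'saw part: %s' % part.type)
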
    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params  # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()


formatmap = {b'20': unbundle20}

b2streamparamsmap = {}


def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""

    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func

    return decorator


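# Sketch of how a handler for a hypothetical advisory stream parameter
# named b'example' would be registered (the parameter does not exist;
# only the registration pattern and the handler signature are real):
#
#     @b2streamparamhandler(b'example')
#     def processexample(unbundler, param, value):
#         indebug(unbundler.ui, b'example parameter: %s' % (value or b''))

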
@b2streamparamhandler(b'compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters may be modified after the generation has
    begun.
    """

    def __init__(
        self,
        parttype,
        mandatoryparams=(),
        advisoryparams=(),
        data=b'',
        mandatory=True,
    ):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError(b'duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently being generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
            cls,
            id(self),
            self.id,
            self.type,
            self.mandatory,
        )

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(
            self.type,
            self._mandatoryparams,
            self._advisoryparams,
            self._data,
            self.mandatory,
        )

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value=b'', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        if name in self._seenparams:
            raise ValueError(b'duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

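    # Usage sketch: parts are normally created through
    # ``bundle20.newpart()`` so they get an id assigned, then decorated
    # with parameters (the values here are illustrative):
    #
    #     part = bundler.newpart(b'changegroup', data=cg.getchunks())
    #     part.addparam(b'version', cg.version)  # mandatory by default
    #     part.addparam(b'nbchanges', b'42', mandatory=False)  # advisory
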
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError(b'part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = [b'bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            if not self.data:
                msg.append(b' empty payload')
            elif util.safehasattr(self.data, 'next') or util.safehasattr(
                self.data, b'__next__'
            ):
                msg.append(b' streamed payload')
            else:
                msg.append(b' %i bytes payload' % len(self.data))
            msg.append(b'\n')
            ui.debug(b''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [
            _pack(_fparttypesize, len(parttype)),
            parttype,
            _pack(_fpartid, self.id),
        ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = b''.join(header)
        except TypeError:
            raise TypeError(
                'Found a non-bytes trying to '
                'build bundle part header: %r' % header
            )
        outdebug(ui, b'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, b'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug(b'bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug(
                b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
            )
            tb = sys.exc_info()[2]
            msg = b'unexpected error: %s' % bexc
            interpart = bundlepart(
                b'error:abort', [(b'message', msg)], mandatory=False
            )
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, b'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, b'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

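    # For reference, the header emitted above lays out as (all integers
    # big-endian, per the _f* formats defined earlier in this module):
    #
    #     uint8     length of the part type
    #     bytes     part type (upper-cased when the part is mandatory)
    #     uint32    part id
    #     uint8 x2  mandatory and advisory parameter counts
    #     uint8 *   alternating key/value lengths for every parameter
    #     bytes *   the parameter keys and values themselves
    #
    # followed by the payload as size-prefixed chunks and a zero-size
    # chunk terminator.
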
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next') or util.safehasattr(
            self.data, b'__next__'
        ):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1


class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows an exception raised on the producer side during part
    iteration to be transmitted while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug(
            b'bundle2-input-stream-interrupt: opening out of band context\n'
        )
        indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, b'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug(
            b'bundle2-input-stream-interrupt: closing out of band context\n'
        )


class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError(b'no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable(b'no repo access from stream interruption')


def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool(b'devel', b'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, b'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(
                    _(
                        b'stream ended unexpectedly '
                        b'(got %d bytes, expected %d)'
                    )
                    % (len(s), chunksize)
                )

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                b'negative payload chunk size: %s' % chunksize
            )

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(
                _(b'stream ended unexpectedly (got %d bytes, expected %d)')
                % (len(s), headersize)
            )

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)


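# A minimal decoding example (sketch): the frames below are hand-built
# with the same size prefix the function expects, and a zero size marks
# the end of the payload. ``ui`` is assumed to be a real ui object:
#
#     import io
#     frames = (
#         _pack(_fpayloadsize, 5) + b'hello'
#         + _pack(_fpayloadsize, 6) + b' world'
#         + _pack(_fpayloadsize, 0)
#     )
#     data = b''.join(decodepayloadchunks(ui, io.BytesIO(frames)))
#     # data == b'hello world'

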
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
            fp, b'tell'
        )
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset : (offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, b'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = self.type != self.type.lower()
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # the header has been read, record it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug(
                    b'bundle2-input-part: total payload size %i\n' % self._pos
                )
            self.consumed = True
        return data


class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """

    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, b'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), (
                b'Unknown chunk %d' % chunknum
            )
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError(b'Unknown chunk')

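    # Illustration of the index (values hypothetical): after decoding two
    # 32768-byte chunks whose part header ended at file offset 10,
    # ``_chunkindex`` holds [(0, 10), (32768, 32782), (65536, 65554)],
    # each frame costing 4 extra bytes on disk for its size prefix. A
    # call to _findchunk(40000) then returns (1, 7232): restart from the
    # second chunk and discard 7232 decoded bytes.
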
    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError(b'Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError(b'Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_(b'Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_(b'File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None


# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {
    b'HG20': (),
    b'bookmarks': (),
    b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
    b'listkeys': (),
    b'pushkey': (),
    b'digests': tuple(sorted(util.DIGESTS.keys())),
    b'remote-changegroup': (b'http', b'https'),
    b'hgtagsfnodes': (),
    b'phases': (b'heads',),
    b'stream': (b'v2',),
}


def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in (b'client', b'server'):
        raise error.ProgrammingError(b'role argument must be client or server')

    caps = capabilities.copy()
    caps[b'changegroup'] = tuple(
        sorted(changegroup.supportedincomingversions(repo))
    )
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
        caps[b'obsmarkers'] = supportedformat
    if allowpushback:
        caps[b'pushback'] = ()
    cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
    if cpmode == b'check-related':
        caps[b'checkheads'] = (b'related',)
    if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
        caps.pop(b'phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == b'server':
        streamsupported = repo.ui.configbool(
            b'server', b'uncompressed', untrusted=True
        )
        featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop(b'stream')
    # Else always advertise support on client, because payload support
    # should always be advertised.

    # b'rev-branch-cache' is no longer advertised, but still supported
    # for legacy clients.

    return caps

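# Typical flow around the capabilities (sketch): the mapping returned
# above is serialized with ``encodecaps()`` from earlier in this module
# into the urlquoted, newline-separated blob exposed through the
# 'bundle2' capability, and parsed back on the other side:
#
#     caps = getrepocaps(repo, role=b'server')
#     blob = encodecaps(caps)
#     parsed = decodecaps(blob)  # back to a {name: values} mapping

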
def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable(b'bundle2')
    if not raw and raw != b'':
        return {}
    capsblob = urlreq.unquote(remote.capable(b'bundle2'))
    return decodecaps(capsblob)


def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict"""
    obscaps = caps.get(b'obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]


def writenewbundle(
    ui,
    repo,
    source,
    filename,
    bundletype,
    outgoing,
    opts,
    vfs=None,
    compression=None,
    compopts=None,
):
    if bundletype.startswith(b'HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
        return writebundle(
            ui,
            cg,
            filename,
            bundletype,
            vfs=vfs,
            compression=compression,
            compopts=compopts,
        )
    elif not bundletype.startswith(b'HG20'):
        raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)

    caps = {}
    if b'obsolescence' in opts:
        caps[b'obsmarkers'] = (b'V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


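# Caller sketch (argument values are illustrative, not prescribed by
# this module): write an HG20 bundle containing a changegroup for some
# precomputed ``outgoing`` set:
#
#     writenewbundle(ui, repo, b'bundle', b'dump.hg', b'HG20',
#                    outgoing, {b'changegroup': True})

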
def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now, so we keep them separated for the sake of
    # simplicity.

    # we might not always want a changegroup in such a bundle, for example
    # in stream bundles
    if opts.get(b'changegroup', True):
        cgversion = opts.get(b'cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        if opts.get(b'phases') and repo.revs(
            b'%ln and secret()', outgoing.ancestorsof
        ):
            part.addparam(
                b'targetphase', b'%d' % phases.secret, mandatory=False
            )
        if repository.REPO_FEATURE_SIDE_DATA in repo.features:
            part.addparam(b'exp-sidedata', b'1')

    if opts.get(b'streamv2', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get(b'revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get(b'obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(
            bundler,
            obsmarkers,
            mandatory=opts.get(b'obsolescence-mandatory', True),
        )

    if opts.get(b'phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart(b'phase-heads', data=phasedata)


def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))


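# Illustrative note on addparttagsfnodescache above (an added comment, not
# part of the upstream module): the part payload is a flat byte string of
# 40-byte records, each a 20-byte changeset node immediately followed by the
# 20-byte .hgtags filenode, with no count or separator. The 'hgtagsfnodes'
# handler near the end of this module reads it back in fixed 20-byte chunks.

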
def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changesets
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)


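# Illustrative note on addpartrevbranchcache above (an added comment, not
# part of the upstream module): each record yielded by generate() starts
# with a header packed by rbcstruct -- struct.Struct(b'>III'), defined
# further down in this module -- i.e. three big-endian uint32 fields: the
# UTF-8 branch name length, the open-head count, and the closed-head count,
# followed by the branch name and the raw 20-byte nodes themselves.

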
def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params


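# Illustrative note (an added comment, not part of the upstream module): an
# empty requirement set encodes to an empty bytestring, since
# urlreq.quote(b','.join(sorted([]))) == b''. This is the corner case the
# 'stream2' handler at the bottom of this module must account for when
# parsing the 'requirements' parameter back.

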
def format_remote_wanted_sidedata(repo):
    """Formats a repo's wanted sidedata categories into a bytestring for
    capabilities exchange."""
    wanted = b""
    if repo._wanted_sidedata:
        wanted = b','.join(
            pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
        )
    return wanted


def read_remote_wanted_sidedata(remote):
    sidedata_categories = remote.capable(b'exp-wanted-sidedata')
    return read_wanted_sidedata(sidedata_categories)


def read_wanted_sidedata(formatted):
    if formatted:
        return set(formatted.split(b','))
    return set()


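# Illustrative sketch of the helpers above (an added comment, not part of
# the upstream module), using hypothetical category names:
#
#   read_wanted_sidedata(b'changes,files') -> {b'changes', b'files'}
#   read_wanted_sidedata(b'')              -> set()
#
# so format_remote_wanted_sidedata() and read_wanted_sidedata() round-trip
# a set of category names through a comma-separated bytestring.

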
def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        raise error.Abort(
            _(
                b'stream data requested but server does not allow '
                b'this feature'
            ),
            hint=_(
                b'well-behaved clients should not be '
                b'requesting stream data from servers not '
                b'advertising it; the client may be buggy'
            ),
        )

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get('includepats')
    excludepats = kwargs.get('excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    filecount, bytecount, it = streamclone.generatev2(
        repo, includepats, excludepats, includeobsmarkers
    )
    requirements = streamclone.streamed_requirements(repo)
    requirements = _formatrequirementsspec(requirements)
    part = bundler.newpart(b'stream2', data=it)
    part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
    part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
    part.addparam(b'requirements', requirements, mandatory=True)


def buildobsmarkerspart(bundler, markers, mandatory=True):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)


def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only supports v1 changegroups')
            )
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result


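# Illustrative note on combinechangegroupresults above (an added comment,
# not part of the upstream module): each individual result encodes head
# movement as 1 + n when n heads were added (ret > 1) and -1 - n when n
# heads were removed (ret < -1). For example, combining two results of 2
# (one new head each) gives changedheads == 2 and a combined result of 3.

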
@parthandler(
    b'changegroup',
    (
        b'version',
        b'nbchanges',
        b'exp-sidedata',
        b'exp-wanted-sidedata',
        b'treemanifest',
        b'targetphase',
    ),
)
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo"""
    from . import localrepo

    tr = op.gettransaction()
    unpackerversion = inpart.params.get(b'version', b'01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if b'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get(b'nbchanges'))
    if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
        if len(op.repo.changelog) != 0:
            raise error.Abort(
                _(
                    b"bundle contains tree manifests, but local repo is "
                    b"non-empty and does not use tree manifests"
                )
            )
        op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
        op.repo.svfs.options = localrepo.resolvestorevfsoptions(
            op.repo.ui, op.repo.requirements, op.repo.features
        )
        scmutil.writereporequirements(op.repo)

    extrakwargs = {}
    targetphase = inpart.params.get(b'targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)

    remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
    extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)

    ret = _processchangegroup(
        op,
        cg,
        tr,
        op.source,
        b'bundle2',
        expectedtotal=nbchangesets,
        **extrakwargs
    )
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup', mandatory=False)
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    assert not inpart.read()


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server's knowledge about the
        bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the
        bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = urlutil.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (urlutil.hidepassword(raw_url), e.message)
        )
    assert not inpart.read()


@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is to be used to detect push races regarding bookmarks. It
    contains binary encoded (bookmark, node) tuples. If the local state does
    not match the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(op.repo, inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    short(node),
                    short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. If other activities happen on unrelated heads,
    they are ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


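# Illustrative note on the two check handlers above (an added comment, not
# part of the upstream module): both parts carry their payload as a bare
# concatenation of 20-byte binary nodes -- no length prefix, count, or
# separator -- so a sender would build it as roughly
# b''.join(expected_heads), which is why the readers loop on
# inpart.read(20) until the stream is exhausted.

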
@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in pycompat.iteritems(phasetonodes):
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )


@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such update. This behavior is
    suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(op.repo, inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = hex(bookstore.get(book, b''))
                hookargs[b'new'] = hex(node if node is not None else b'')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        for book, node in changes:
            if bookmarks.isdivergent(book):
                msg = _(b'cannot accept divergent bookmark %s!') % book
                raise error.Abort(msg)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook(unused_success):
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )


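# Illustrative note on handlebookmark above (an added comment, not part of
# the upstream module): in 'records' mode a puller would consume the result
# as something like
#
#   for record in op.records[b'bookmarks']:
#       book, node = record[b'bookmark'], record[b'node']
#
# which mirrors the dictionaries added to op.records in the handler.

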
@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


@parthandler(b'hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)


rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """Legacy part, ignored for compatibility with bundles from or
    for Mercurial before 5.7. Newer Mercurial computes the cache
    efficiently enough during unbundling that the additional transfer
    is unnecessary."""


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)


@parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
def handlestreamv2bundle(op, part):

    requirements = urlreq.unquote(part.params[b'requirements'])
    requirements = requirements.split(b',') if requirements else []
    filecount = int(part.params[b'filecount'])
    bytecount = int(part.params[b'bytecount'])

    repo = op.repo
    if len(repo):
        msg = _(b'cannot apply stream clone to non empty repository')
        raise error.Abort(msg)

    repo.ui.debug(b'applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)


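# Note on handlestreamv2bundle above (an added comment giving this
# changeset's context, not part of the upstream module): the 'requirements'
# parameter may legitimately be empty, and in Python b''.split(b',') yields
# [b''] rather than [], so the two-step parse keeps a spurious empty
# requirement name from reaching streamclone.applybundlev2().

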
def widen_bundle(
    bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
):
    """generates bundle2 for widening a narrow clone

    bundler is the bundle to which data should be added
    repo is the localrepository instance
    oldmatcher matches what the client already has
    newmatcher matches what the client needs (including what it already has)
    common is the set of common heads between server and client
    known is a set of revs known on the client side (used in ellipses)
    cgversion is the changegroup version to send
    ellipses is a boolean value telling whether to send ellipses data or not

    returns bundle2 of the data required for extending
    """
    commonnodes = set()
    cl = repo.changelog
    for r in repo.revs(b"::%ln", common):
        commonnodes.add(cl.node(r))
    if commonnodes:
        packer = changegroup.getbundler(
            cgversion,
            repo,
            oldmatcher=oldmatcher,
            matcher=newmatcher,
            fullnodes=commonnodes,
        )
        cgdata = packer.generate(
            {repo.nullid},
            list(commonnodes),
            False,
            b'narrow_widen',
            changelog=False,
        )

        part = bundler.newpart(b'changegroup', data=cgdata)
        part.addparam(b'version', cgversion)
        if scmutil.istreemanifest(repo):
            part.addparam(b'treemanifest', b'1')
        if repository.REPO_FEATURE_SIDE_DATA in repo.features:
            part.addparam(b'exp-sidedata', b'1')
            wanted = format_remote_wanted_sidedata(repo)
            part.addparam(b'exp-wanted-sidedata', wanted)

    return bundler
@@ -1,819 +1,825 b''
1 #require serve no-reposimplestore no-chg
2
3 #testcases stream-legacy stream-bundle2
4
5 #if stream-legacy
6 $ cat << EOF >> $HGRCPATH
7 > [server]
8 > bundle2.stream = no
9 > EOF
10 #endif
11
12 Initialize repository
13
14 $ hg init server
15 $ cd server
16 $ sh $TESTDIR/testlib/stream_clone_setup.sh
17 adding 00changelog-ab349180a0405010.nd
18 adding 00changelog.d
19 adding 00changelog.i
20 adding 00changelog.n
21 adding 00manifest.d
22 adding 00manifest.i
23 adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
24 adding data/foo.d
25 adding data/foo.i
26 adding data/foo.n
27 adding data/undo.babar
28 adding data/undo.d
29 adding data/undo.foo.d
30 adding data/undo.foo.i
31 adding data/undo.foo.n
32 adding data/undo.i
33 adding data/undo.n
34 adding data/undo.py
35 adding foo.d
36 adding foo.i
37 adding foo.n
38 adding meta/foo.d
39 adding meta/foo.i
40 adding meta/foo.n
41 adding meta/undo.babar
42 adding meta/undo.d
43 adding meta/undo.foo.d
44 adding meta/undo.foo.i
45 adding meta/undo.foo.n
46 adding meta/undo.i
47 adding meta/undo.n
48 adding meta/undo.py
49 adding savanah/foo.d
50 adding savanah/foo.i
51 adding savanah/foo.n
52 adding savanah/undo.babar
53 adding savanah/undo.d
54 adding savanah/undo.foo.d
55 adding savanah/undo.foo.i
56 adding savanah/undo.foo.n
57 adding savanah/undo.i
58 adding savanah/undo.n
59 adding savanah/undo.py
60 adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
61 adding store/foo.d
62 adding store/foo.i
63 adding store/foo.n
64 adding store/undo.babar
65 adding store/undo.d
66 adding store/undo.foo.d
67 adding store/undo.foo.i
68 adding store/undo.foo.n
69 adding store/undo.i
70 adding store/undo.n
71 adding store/undo.py
72 adding undo.babar
73 adding undo.d
74 adding undo.foo.d
75 adding undo.foo.i
76 adding undo.foo.n
77 adding undo.i
78 adding undo.n
79 adding undo.py
80
81 $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
82 $ cat hg.pid > $DAEMON_PIDS
83 $ cd ..
84
85 Check local clone
86 ==================
87
88 The logic is close enough to the uncompressed case.
89 This is present here to reuse the testing around files with "special" names.
90
91 $ hg clone server local-clone
92 updating to branch default
93 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
94
95 Check that the clone went well
96
97 $ hg verify -R local-clone
98 checking changesets
99 checking manifests
100 crosschecking files in changesets and manifests
101 checking files
102 checked 3 changesets with 1088 changes to 1088 files
103
104 Check uncompressed
105 ==================
106
107 Cannot stream clone when server.uncompressed is set
108
109 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
110 200 Script output follows
111
112 1
113
114 #if stream-legacy
115 $ hg debugcapabilities http://localhost:$HGPORT
116 Main capabilities:
117 batch
118 branchmap
119 $USUAL_BUNDLE2_CAPS_SERVER$
120 changegroupsubset
121 compression=$BUNDLE2_COMPRESSIONS$
122 getbundle
123 httpheader=1024
124 httpmediatype=0.1rx,0.1tx,0.2tx
125 known
126 lookup
127 pushkey
128 unbundle=HG10GZ,HG10BZ,HG10UN
129 unbundlehash
130 Bundle2 capabilities:
131 HG20
132 bookmarks
133 changegroup
134 01
135 02
136 checkheads
137 related
138 digests
139 md5
140 sha1
141 sha512
142 error
143 abort
144 unsupportedcontent
145 pushraced
146 pushkey
147 hgtagsfnodes
148 listkeys
149 phases
150 heads
151 pushkey
152 remote-changegroup
153 http
154 https
155
156 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
157 warning: stream clone requested but server has them disabled
158 requesting all changes
159 adding changesets
160 adding manifests
161 adding file changes
162 added 3 changesets with 1088 changes to 1088 files
163 new changesets 96ee1d7354c4:5223b5e3265f
164
165 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
166 200 Script output follows
167 content-type: application/mercurial-0.2
168
169
170 $ f --size body --hexdump --bytes 100
171 body: size=232
172 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
173 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
174 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
175 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
176 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
177 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
178 0060: 69 73 20 66 |is f|
179
180 #endif
181 #if stream-bundle2
182 $ hg debugcapabilities http://localhost:$HGPORT
183 Main capabilities:
184 batch
185 branchmap
186 $USUAL_BUNDLE2_CAPS_SERVER$
187 changegroupsubset
188 compression=$BUNDLE2_COMPRESSIONS$
189 getbundle
190 httpheader=1024
191 httpmediatype=0.1rx,0.1tx,0.2tx
192 known
193 lookup
194 pushkey
195 unbundle=HG10GZ,HG10BZ,HG10UN
196 unbundlehash
197 Bundle2 capabilities:
198 HG20
199 bookmarks
200 changegroup
201 01
202 02
203 checkheads
204 related
205 digests
206 md5
207 sha1
208 sha512
209 error
210 abort
211 unsupportedcontent
212 pushraced
213 pushkey
214 hgtagsfnodes
215 listkeys
216 phases
217 heads
218 pushkey
219 remote-changegroup
220 http
221 https
222
223 $ hg clone --stream -U http://localhost:$HGPORT server-disabled
224 warning: stream clone requested but server has them disabled
225 requesting all changes
226 adding changesets
227 adding manifests
228 adding file changes
229 added 3 changesets with 1088 changes to 1088 files
230 new changesets 96ee1d7354c4:5223b5e3265f
231
232 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
233 200 Script output follows
234 content-type: application/mercurial-0.2
235
236
237 $ f --size body --hexdump --bytes 100
238 body: size=232
239 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
240 0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
241 0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
242 0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
243 0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
244 0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
245 0060: 69 73 20 66 |is f|
246
247 #endif
248
249 $ killdaemons.py
250 $ cd server
251 $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
252 $ cat hg.pid > $DAEMON_PIDS
253 $ cd ..
254
255 Basic clone
256
257 #if stream-legacy
258 $ hg clone --stream -U http://localhost:$HGPORT clone1
259 streaming all changes
260 1090 files to transfer, 102 KB of data (no-zstd !)
261 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
262 1090 files to transfer, 98.8 KB of data (zstd !)
263 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
264 searching for changes
265 no changes found
266 $ cat server/errors.txt
267 #endif
268 #if stream-bundle2
269 $ hg clone --stream -U http://localhost:$HGPORT clone1
270 streaming all changes
271 1093 files to transfer, 102 KB of data (no-zstd !)
272 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
273 1093 files to transfer, 98.9 KB of data (zstd !)
274 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
275
276 $ ls -1 clone1/.hg/cache
277 branch2-base
278 branch2-immutable
279 branch2-served
280 branch2-served.hidden
281 branch2-visible
282 branch2-visible-hidden
283 rbc-names-v1
284 rbc-revs-v1
285 tags2
286 tags2-served
287 $ cat server/errors.txt
288 #endif
289
290 getbundle requests with stream=1 are uncompressed
291
292 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
293 200 Script output follows
294 content-type: application/mercurial-0.2
295
296
297 #if no-zstd no-rust
298 $ f --size --hex --bytes 256 body
299 body: size=119123
300 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
301 0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
302 0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
303 0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
304 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
305 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
306 0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
307 0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
308 0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
309 0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
310 00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
311 00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
312 00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
313 00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
314 00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
315 00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
316 #endif
317 #if zstd no-rust
318 $ f --size --hex --bytes 256 body
319 body: size=116310 (no-bigendian !)
320 body: size=116305 (bigendian !)
321 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
322 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
323 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
324 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
325 0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
326 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
327 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
328 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
329 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
330 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
331 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
332 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
333 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
334 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
335 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
336 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
337 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
338 #endif
339 #if zstd rust no-dirstate-v2
340 $ f --size --hex --bytes 256 body
341 body: size=116310
342 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
343 0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
344 0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
345 0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109|
346 0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
347 0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
348 0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
349 0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
350 0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
351 0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
352 00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
353 00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
354 00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
355 00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
356 00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
357 00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
358 #endif
359 #if zstd dirstate-v2
360 $ f --size --hex --bytes 256 body
361 body: size=109549
362 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
363 0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
364 0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
365 0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
366 0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
367 0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
368 0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
369 0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
370 0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
371 0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
372 00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
373 00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
374 00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
375 00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
376 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
377 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
378 #endif
379
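The dumps above expose the three mandatory parameters of the stream2 bundle part: bytecount, filecount, and a urlquoted requirements list (the %2C runs are quoted commas). A small illustrative decode in plain Python, using the requirements string visible in the zstd dump; this is a sketch, not part of the test suite:

    # Illustrative only: undo the URL quoting shown in the hex dumps.
    from urllib.parse import unquote

    raw = "generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog"
    requirements = set(unquote(raw).split(","))
    assert requirements == {
        "generaldelta",
        "revlog-compression-zstd",
        "revlogv1",
        "sparserevlog",
    }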
380 --uncompressed is an alias to --stream
381
382 #if stream-legacy
383 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
384 streaming all changes
385 1090 files to transfer, 102 KB of data (no-zstd !)
386 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
387 1090 files to transfer, 98.8 KB of data (zstd !)
388 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
389 searching for changes
390 no changes found
391 #endif
392 #if stream-bundle2
393 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
394 streaming all changes
395 1093 files to transfer, 102 KB of data (no-zstd !)
396 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
397 1093 files to transfer, 98.9 KB of data (zstd !)
398 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
399 #endif
400
401 Clone with background file closing enabled
402
403 #if stream-legacy
404 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
405 using http://localhost:$HGPORT/
406 sending capabilities command
407 sending branchmap command
408 streaming all changes
409 sending stream_out command
410 1090 files to transfer, 102 KB of data (no-zstd !)
411 1090 files to transfer, 98.8 KB of data (zstd !)
412 starting 4 threads for background file closing
413 updating the branch cache
414 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
415 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
416 query 1; heads
417 sending batch command
418 searching for changes
419 all remote heads known locally
420 no changes found
421 sending getbundle command
422 bundle2-input-bundle: with-transaction
423 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
424 bundle2-input-part: "phase-heads" supported
425 bundle2-input-part: total payload size 24
426 bundle2-input-bundle: 2 parts total
427 checking for updated bookmarks
428 updating the branch cache
429 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
430 #endif
431 #if stream-bundle2
432 $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
433 using http://localhost:$HGPORT/
434 sending capabilities command
435 query 1; heads
436 sending batch command
437 streaming all changes
438 sending getbundle command
439 bundle2-input-bundle: with-transaction
440 bundle2-input-part: "stream2" (params: 3 mandatory) supported
441 applying stream bundle
442 1093 files to transfer, 102 KB of data (no-zstd !)
443 1093 files to transfer, 98.9 KB of data (zstd !)
444 starting 4 threads for background file closing
445 starting 4 threads for background file closing
446 updating the branch cache
447 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
448 bundle2-input-part: total payload size 118984 (no-zstd !)
449 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
450 bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
451 bundle2-input-part: total payload size 116140 (zstd bigendian !)
452 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
453 bundle2-input-bundle: 2 parts total
454 checking for updated bookmarks
455 updating the branch cache
456 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
457 #endif
458
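For reference, the two --config overrides used above correspond to this persistent configuration (shown for illustration; the test keeps them on the command line):

    [worker]
    backgroundclose = true
    backgroundcloseminfilecount = 1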
459 Cannot stream clone when there are secret changesets
460
461 $ hg -R server phase --force --secret -r tip
462 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
463 warning: stream clone requested but server has them disabled
464 requesting all changes
465 adding changesets
466 adding manifests
467 adding file changes
468 added 2 changesets with 1025 changes to 1025 files
469 new changesets 96ee1d7354c4:c17445101a72
470
471 $ killdaemons.py
472
473 Streaming of secrets can be overridden by server config
474
475 $ cd server
476 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
477 $ cat hg.pid > $DAEMON_PIDS
478 $ cd ..
479
480 #if stream-legacy
481 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
482 streaming all changes
483 1090 files to transfer, 102 KB of data (no-zstd !)
484 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
485 1090 files to transfer, 98.8 KB of data (zstd !)
486 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
487 searching for changes
488 no changes found
489 #endif
490 #if stream-bundle2
491 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
492 streaming all changes
493 1093 files to transfer, 102 KB of data (no-zstd !)
494 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
495 1093 files to transfer, 98.9 KB of data (zstd !)
496 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
497 #endif
498
499 $ killdaemons.py
500
501 Verify interaction between preferuncompressed and secret presence
502
503 $ cd server
504 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
505 $ cat hg.pid > $DAEMON_PIDS
506 $ cd ..
507
508 $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
509 requesting all changes
510 adding changesets
511 adding manifests
512 adding file changes
513 added 2 changesets with 1025 changes to 1025 files
514 new changesets 96ee1d7354c4:c17445101a72
515
516 $ killdaemons.py
517
518 Clone not allowed when full bundles disabled and can't serve secrets
519
520 $ cd server
521 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
522 $ cat hg.pid > $DAEMON_PIDS
523 $ cd ..
524
525 $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
526 warning: stream clone requested but server has them disabled
527 requesting all changes
528 remote: abort: server has pull-based clones disabled
529 abort: pull failed on remote
530 (remove --pull if specified or upgrade Mercurial)
531 [100]
532
533 Local stream clone with secrets involved
534 (This only tests behavior: if you have access to the repo's files,
535 there is no security boundary, so it isn't important to prevent a clone here.)
536
537 $ hg clone -U --stream server local-secret
538 warning: stream clone requested but server has them disabled
539 requesting all changes
540 adding changesets
541 adding manifests
542 adding file changes
543 added 2 changesets with 1025 changes to 1025 files
544 new changesets 96ee1d7354c4:c17445101a72
545
546 Stream clone while repo is changing:
547
548 $ mkdir changing
549 $ cd changing
550
551 extension for delaying the server process so we can reliably modify the repo
552 while cloning
553
554 $ cat > stream_steps.py <<EOF
555 > import os
556 > import sys
557 > from mercurial import (
558 >     encoding,
559 >     extensions,
560 >     streamclone,
561 >     testing,
562 > )
563 > WALKED_FILE_1 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_1']
564 > WALKED_FILE_2 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_2']
565 >
566 > def _test_sync_point_walk_1(orig, repo):
567 >     testing.write_file(WALKED_FILE_1)
568 >
569 > def _test_sync_point_walk_2(orig, repo):
570 >     assert repo._currentlock(repo._lockref) is None
571 >     testing.wait_file(WALKED_FILE_2)
572 >
573 > extensions.wrapfunction(
574 >     streamclone,
575 >     '_test_sync_point_walk_1',
576 >     _test_sync_point_walk_1
577 > )
578 > extensions.wrapfunction(
579 >     streamclone,
580 >     '_test_sync_point_walk_2',
581 >     _test_sync_point_walk_2
582 > )
583 > EOF
584
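The inline extension above relies on mercurial.extensions.wrapfunction to intercept the two sync points exported by streamclone. A minimal standalone sketch of the wrapping semantics (the wrapper name and debug message are illustrative, not from the test):

    # The wrapper receives the original callable as its first argument and
    # decides when, or whether, to call it.
    from mercurial import extensions, streamclone

    def noisy_walk(orig, repo):
        repo.ui.debug(b'entering stream walk\n')
        return orig(repo)

    extensions.wrapfunction(streamclone, '_test_sync_point_walk_1', noisy_walk)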
585 prepare a repo with a small and a big file to cover both code paths in emitrevlogdata
586
587 $ hg init repo
588 $ touch repo/f1
589 $ $TESTDIR/seq.py 50000 > repo/f2
590 $ hg -R repo ci -Aqm "0"
591 $ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1"
592 $ export HG_TEST_STREAM_WALKED_FILE_1
593 $ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2"
594 $ export HG_TEST_STREAM_WALKED_FILE_2
595 $ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3"
596 $ export HG_TEST_STREAM_WALKED_FILE_3
597 # $ cat << EOF >> $HGRCPATH
598 # > [hooks]
599 # > pre-clone=rm -f "$TESTTMP/sync_file_walked_*"
600 # > EOF
601 $ hg serve -R repo -p $HGPORT1 -d --error errors.log --pid-file=hg.pid --config extensions.stream_steps="$RUNTESTDIR/testlib/ext-stream-clone-steps.py"
602 $ cat hg.pid >> $DAEMON_PIDS
603
604 clone while modifying the repo between stat'ing the files under the write lock
605 and actually serving their content
606
607 $ (hg clone -q --stream -U http://localhost:$HGPORT1 clone; touch "$HG_TEST_STREAM_WALKED_FILE_3") &
608 $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1
609 $ echo >> repo/f1
610 $ echo >> repo/f2
611 $ hg -R repo ci -m "1" --config ui.timeout.warn=-1
612 $ touch $HG_TEST_STREAM_WALKED_FILE_2
613 $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3
614 $ hg -R clone id
615 000000000000
616 $ cat errors.log
617 $ cd ..
618
619 Stream repository with bookmarks
620 --------------------------------
621
622 (revert introduction of secret changeset)
623
624 $ hg -R server phase --draft 'secret()'
625
626 add a bookmark
627
628 $ hg -R server bookmark -r tip some-bookmark
629
630 clone it
631
632 #if stream-legacy
633 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
634 streaming all changes
635 1090 files to transfer, 102 KB of data (no-zstd !)
636 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
637 1090 files to transfer, 98.8 KB of data (zstd !)
638 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
639 searching for changes
640 no changes found
641 updating to branch default
642 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
643 #endif
644 #if stream-bundle2
645 $ hg clone --stream http://localhost:$HGPORT with-bookmarks
646 streaming all changes
647 1096 files to transfer, 102 KB of data (no-zstd !)
648 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
649 1096 files to transfer, 99.1 KB of data (zstd !)
650 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
651 updating to branch default
652 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
653 #endif
654 $ hg verify -R with-bookmarks
655 checking changesets
656 checking manifests
657 crosschecking files in changesets and manifests
658 checking files
659 checked 3 changesets with 1088 changes to 1088 files
660 $ hg -R with-bookmarks bookmarks
661 some-bookmark 2:5223b5e3265f
662
663 Stream repository with phases
664 -----------------------------
665
666 Clone as publishing
667
668 $ hg -R server phase -r 'all()'
669 0: draft
670 1: draft
671 2: draft
672
673 #if stream-legacy
674 $ hg clone --stream http://localhost:$HGPORT phase-publish
675 streaming all changes
676 1090 files to transfer, 102 KB of data (no-zstd !)
677 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
678 1090 files to transfer, 98.8 KB of data (zstd !)
679 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
680 searching for changes
681 no changes found
682 updating to branch default
683 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
684 #endif
685 #if stream-bundle2
686 $ hg clone --stream http://localhost:$HGPORT phase-publish
687 streaming all changes
688 1096 files to transfer, 102 KB of data (no-zstd !)
689 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
690 1096 files to transfer, 99.1 KB of data (zstd !)
691 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
692 updating to branch default
693 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
694 #endif
695 $ hg verify -R phase-publish
696 checking changesets
697 checking manifests
698 crosschecking files in changesets and manifests
699 checking files
700 checked 3 changesets with 1088 changes to 1088 files
701 $ hg -R phase-publish phase -r 'all()'
702 0: public
703 1: public
704 2: public
705
706 Clone as non publishing
707
708 $ cat << EOF >> server/.hg/hgrc
709 > [phases]
710 > publish = False
711 > EOF
712 $ killdaemons.py
713 $ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
714 $ cat hg.pid > $DAEMON_PIDS
715
716 #if stream-legacy
717
718 With v1 of the stream protocol, changesets are always cloned as public. This makes
719 stream v1 unsuitable for non-publishing repositories.
720
721 $ hg clone --stream http://localhost:$HGPORT phase-no-publish
722 streaming all changes
723 1090 files to transfer, 102 KB of data (no-zstd !)
724 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
725 1090 files to transfer, 98.8 KB of data (zstd !)
726 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
727 searching for changes
728 no changes found
729 updating to branch default
730 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
731 $ hg -R phase-no-publish phase -r 'all()'
732 0: public
733 1: public
734 2: public
735 #endif
736 #if stream-bundle2
737 $ hg clone --stream http://localhost:$HGPORT phase-no-publish
738 streaming all changes
739 1097 files to transfer, 102 KB of data (no-zstd !)
740 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
741 1097 files to transfer, 99.1 KB of data (zstd !)
742 transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
743 updating to branch default
744 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
745 $ hg -R phase-no-publish phase -r 'all()'
746 0: draft
747 1: draft
748 2: draft
749 #endif
750 $ hg verify -R phase-no-publish
751 checking changesets
752 checking manifests
753 crosschecking files in changesets and manifests
754 checking files
755 checked 3 changesets with 1088 changes to 1088 files
756
757 $ killdaemons.py
758
759 #if stream-legacy
760
761 With v1 of the stream protocol, changesets are always cloned as public, and there is
762 no obsolescence marker exchange in stream v1.
763
764 #endif
765 #if stream-bundle2
766
767 Stream repository with obsolescence
768 -----------------------------------
769
770 Clone non-publishing with obsolescence
771
772 $ cat >> $HGRCPATH << EOF
773 > [experimental]
774 > evolution=all
775 > EOF
776
777 $ cd server
778 $ echo foo > foo
779 $ hg -q commit -m 'about to be pruned'
780 $ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
781 1 new obsolescence markers
782 obsoleted 1 changesets
783 $ hg up null -q
784 $ hg log -T '{rev}: {phase}\n'
785 2: draft
786 1: draft
787 0: draft
788 $ hg serve -p $HGPORT -d --pid-file=hg.pid
789 $ cat hg.pid > $DAEMON_PIDS
790 $ cd ..
791
792 $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
793 streaming all changes
794 1098 files to transfer, 102 KB of data (no-zstd !)
795 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
796 1098 files to transfer, 99.5 KB of data (zstd !)
797 transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !)
798 $ hg -R with-obsolescence log -T '{rev}: {phase}\n'
799 2: draft
800 1: draft
801 0: draft
802 $ hg debugobsolete -R with-obsolescence
803 8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
804 $ hg verify -R with-obsolescence
805 checking changesets
806 checking manifests
807 crosschecking files in changesets and manifests
808 checking files
809 checked 4 changesets with 1089 changes to 1088 files
810
811 $ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
812 streaming all changes
813 remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
814 abort: pull failed on remote
815 [100]
816
817 $ killdaemons.py
818
819 #endif
820
821 Cloning a repo with no requirements does not fail with an obscure error
822
823 $ mkdir -p empty-repo/.hg
824 $ hg clone -q --stream ssh://user@dummy/empty-repo empty-repo2
825 $ hg --cwd empty-repo2 verify -q
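The corner case here: mkdir -p empty-repo/.hg fabricates a repository shell with no requires file at all, which Mercurial treats as an empty requirement set, so the stream clone has zero files to send. A sketch of that convention (illustrative helper, not the code touched by this change):

    # Illustrative: a missing .hg/requires reads as "no requirements".
    import os

    def read_requirements(repopath):
        try:
            with open(os.path.join(repopath, '.hg', 'requires')) as f:
                return set(f.read().splitlines())
        except FileNotFoundError:
            return set()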