cleanup: typos, formatting
Joerg Sonnenberger
r51885:704c3d08 stable
@@ -1,2676 +1,2676 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
7 """Handling of the new bundle2 format
7 """Handling of the new bundle2 format
8
8
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
10 payloads in an application agnostic way. It consist in a sequence of "parts"
10 payloads in an application agnostic way. It consist in a sequence of "parts"
11 that will be handed to and processed by the application layer.
11 that will be handed to and processed by the application layer.
12
12
13
13
14 General format architecture
14 General format architecture
15 ===========================
15 ===========================
16
16
17 The format is architectured as follow
17 The format is architectured as follow
18
18
19 - magic string
19 - magic string
20 - stream level parameters
20 - stream level parameters
21 - payload parts (any number)
21 - payload parts (any number)
22 - end of stream marker.
22 - end of stream marker.
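
For illustration, here is a sketch of the smallest possible stream, an empty
bundle, as laid out on the wire (framing detailed in the sections below)::

    HG20           (magic string)
    <int32: 0>     (stream level parameters size: no parameter)
    <int32: 0>     (empty part header size: end of stream marker)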

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with value
  are stored in the form `<name>=<value>`. Both name and value are urlquoted.

  Empty names are obviously forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allows easy human inspection of a bundle2 header in case of
  troubles.

Any applicative level options MUST go into a bundle2 part instead.
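
For example, a bundle with one advisory parameter ``foo`` valued ``bar baz``
and one mandatory ``Compression`` parameter could carry the following blob (a
sketch, values urlquoted as described above)::

    foo=bar%20baz Compression=BZ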

Payload part
------------------------

Binary format is as follows

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part's parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

      Mandatory parameters come first, then the advisory ones.

      Each parameter's key MUST be unique within the part.
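
    For example (a sketch), one mandatory parameter ``k`` with value ``v``
    and no advisory parameter would encode as the six bytes::

        0x01 0x00 0x01 0x01 'k' 'v'

    that is: mandatory-count 1, advisory-count 0, one (<size-of-key>,
    <size-of-value>) couple of (1, 1), then the two bytes of data.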

:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as much as
  `chunksize` says). The payload part is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.
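
  For instance (a sketch), a payload carrying the 11 bytes of ``hello world``
  in a single chunk would be framed as::

      <int32: 11> hello world <int32: 0>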

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""


import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)
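
# A sketch of the behaviour: validateparttype(b'changegroup') returns
# silently, while validateparttype(b'bad part!') raises ValueError (the space
# and '!' fall outside [a-zA-Z0-9_:-]).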


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
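
# For example (a sketch): _makefpartparamsizes(2) returns b'>BBBB', a format
# that unpacks two (<size-of-key>, <size-of-value>) byte couples in a single
# struct.unpack() call.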


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


class unbundlerecords:
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
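
# Usage sketch (hypothetical values): records = unbundlerecords();
# records.add(b'changegroup', {b'return': 1}) makes the entry available both
# via records[b'changegroup'] and when iterating over the records object.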


class bundleoperation:
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(
        self,
        repo,
        transactiongetter,
        captureoutput=True,
        source=b'',
        remote=None,
    ):
        self.repo = repo
        # the peer object who produced this bundle if available
        self.remote = remote
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(
            repo, unbundler, lambda: tr, source=source, remote=remote
        )
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator:
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
        # Exception, and should not be gracefully cleaned up.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the type
            # of bundle. We should probably clean up or drop this return code
            # craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(
    repo,
    unbundler,
    transactiongetter=None,
    op=None,
    source=b'',
    remote=None,
):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(
            repo,
            transactiongetter,
            source=source,
            remote=remote,
        )
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    if op.remote is not None and op.remote.path is not None:
        remote_path = op.remote.path
        kwargs = kwargs.copy()
        kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)
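
# Round-trip sketch (hypothetical capability values):
# decodecaps(b'HG20\nchangegroup=01,02') returns
# {b'HG20': [], b'changegroup': [b'01', b'02']}, and passing that dictionary
# to encodecaps() yields the same blob back (keys are emitted sorted).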


bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20:
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually create and add if you need better
        control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)
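
    # For example (a sketch): with self._params == [(b'Compression', b'BZ'),
    # (b'foo', b'bar baz')], _paramchunk() returns
    # b'Compression=BZ foo=bar%20baz'.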

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we preserve
        server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin:
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)
807
807
808 def getunbundler(ui, fp, magicstring=None):
808 def getunbundler(ui, fp, magicstring=None):
809 """return a valid unbundler object for a given magicstring"""
809 """return a valid unbundler object for a given magicstring"""
810 if magicstring is None:
810 if magicstring is None:
811 magicstring = changegroup.readexactly(fp, 4)
811 magicstring = changegroup.readexactly(fp, 4)
812 magic, version = magicstring[0:2], magicstring[2:4]
812 magic, version = magicstring[0:2], magicstring[2:4]
813 if magic != b'HG':
813 if magic != b'HG':
814 ui.debug(
814 ui.debug(
815 b"error: invalid magic: %r (version %r), should be 'HG'\n"
815 b"error: invalid magic: %r (version %r), should be 'HG'\n"
816 % (magic, version)
816 % (magic, version)
817 )
817 )
818 raise error.Abort(_(b'not a Mercurial bundle'))
818 raise error.Abort(_(b'not a Mercurial bundle'))
819 unbundlerclass = formatmap.get(version)
819 unbundlerclass = formatmap.get(version)
820 if unbundlerclass is None:
820 if unbundlerclass is None:
821 raise error.Abort(_(b'unknown bundle version %s') % version)
821 raise error.Abort(_(b'unknown bundle version %s') % version)
822 unbundler = unbundlerclass(ui, fp)
822 unbundler = unbundlerclass(ui, fp)
823 indebug(ui, b'start processing of %s stream' % magicstring)
823 indebug(ui, b'start processing of %s stream' % magicstring)
824 return unbundler
824 return unbundler
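
# Usage sketch (assuming `fp` is an open binary stream positioned at the
# start of a bundle): unbundler = getunbundler(ui, fp). The four magic bytes
# are read from `fp` when no `magicstring` argument is given.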
825
825
826
826
827 class unbundle20(unpackermixin):
827 class unbundle20(unpackermixin):
828 """interpret a bundle2 stream
828 """interpret a bundle2 stream
829
829
830 This class is fed with a binary stream and yields parts through its
830 This class is fed with a binary stream and yields parts through its
831 `iterparts` methods."""
831 `iterparts` methods."""
832
832
833 _magicstring = b'HG20'
833 _magicstring = b'HG20'
834
834
835 def __init__(self, ui, fp):
835 def __init__(self, ui, fp):
836 """If header is specified, we do not read it out of the stream."""
836 """If header is specified, we do not read it out of the stream."""
837 self.ui = ui
837 self.ui = ui
838 self._compengine = util.compengines.forbundletype(b'UN')
838 self._compengine = util.compengines.forbundletype(b'UN')
839 self._compressed = None
839 self._compressed = None
840 super(unbundle20, self).__init__(fp)
840 super(unbundle20, self).__init__(fp)
841
841
842 @util.propertycache
842 @util.propertycache
843 def params(self):
843 def params(self):
844 """dictionary of stream level parameters"""
844 """dictionary of stream level parameters"""
845 indebug(self.ui, b'reading bundle2 stream parameters')
845 indebug(self.ui, b'reading bundle2 stream parameters')
846 params = {}
846 params = {}
847 paramssize = self._unpack(_fstreamparamsize)[0]
847 paramssize = self._unpack(_fstreamparamsize)[0]
848 if paramssize < 0:
848 if paramssize < 0:
849 raise error.BundleValueError(
849 raise error.BundleValueError(
850 b'negative bundle param size: %i' % paramssize
850 b'negative bundle param size: %i' % paramssize
851 )
851 )
852 if paramssize:
852 if paramssize:
853 params = self._readexact(paramssize)
853 params = self._readexact(paramssize)
854 params = self._processallparams(params)
854 params = self._processallparams(params)
855 return params
855 return params
856
856
857 def _processallparams(self, paramsblock):
857 def _processallparams(self, paramsblock):
858 """ """
858 """ """
859 params = util.sortdict()
859 params = util.sortdict()
860 for p in paramsblock.split(b' '):
860 for p in paramsblock.split(b' '):
861 p = p.split(b'=', 1)
861 p = p.split(b'=', 1)
862 p = [urlreq.unquote(i) for i in p]
862 p = [urlreq.unquote(i) for i in p]
863 if len(p) < 2:
863 if len(p) < 2:
864 p.append(None)
864 p.append(None)
865 self._processparam(*p)
865 self._processparam(*p)
866 params[p[0]] = p[1]
866 params[p[0]] = p[1]
867 return params
867 return params
868
868
869 def _processparam(self, name, value):
869 def _processparam(self, name, value):
870 """process a parameter, applying its effect if needed
870 """process a parameter, applying its effect if needed
871
871
872 Parameter starting with a lower case letter are advisory and will be
872 Parameter starting with a lower case letter are advisory and will be
873 ignored when unknown. Those starting with an upper case letter are
873 ignored when unknown. Those starting with an upper case letter are
874 mandatory and will this function will raise a KeyError when unknown.
874 mandatory and will this function will raise a KeyError when unknown.
875
875
876 Note: no option are currently supported. Any input will be either
876 Note: no option are currently supported. Any input will be either
877 ignored or failing.
877 ignored or failing.
878 """
878 """
879 if not name:
879 if not name:
880 raise ValueError('empty parameter name')
880 raise ValueError('empty parameter name')
881 if name[0:1] not in pycompat.bytestr(
881 if name[0:1] not in pycompat.bytestr(
882 string.ascii_letters # pytype: disable=wrong-arg-types
882 string.ascii_letters # pytype: disable=wrong-arg-types
883 ):
883 ):
884 raise ValueError('non letter first character: %s' % name)
884 raise ValueError('non letter first character: %s' % name)
885 try:
885 try:
886 handler = b2streamparamsmap[name.lower()]
886 handler = b2streamparamsmap[name.lower()]
887 except KeyError:
887 except KeyError:
888 if name[0:1].islower():
888 if name[0:1].islower():
889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
890 else:
890 else:
891 raise error.BundleUnknownFeatureError(params=(name,))
891 raise error.BundleUnknownFeatureError(params=(name,))
892 else:
892 else:
893 handler(self, name, value)
893 handler(self, name, value)
894
894
895 def _forwardchunks(self):
895 def _forwardchunks(self):
896 """utility to transfer a bundle2 as binary
896 """utility to transfer a bundle2 as binary
897
897
898 This is made necessary by the fact the 'getbundle' command over 'ssh'
898 This is made necessary by the fact the 'getbundle' command over 'ssh'
899 have no way to know then the reply end, relying on the bundle to be
899 have no way to know when the reply ends, relying on the bundle to be
900 interpreted to know its end. This is terrible and we are sorry, but we
900 interpreted to know its end. This is terrible and we are sorry, but we
901 needed to move forward to get general delta enabled.
901 needed to move forward to get general delta enabled.
902 """
902 """
903 yield self._magicstring
903 yield self._magicstring
904 assert 'params' not in vars(self)
904 assert 'params' not in vars(self)
905 paramssize = self._unpack(_fstreamparamsize)[0]
905 paramssize = self._unpack(_fstreamparamsize)[0]
906 if paramssize < 0:
906 if paramssize < 0:
907 raise error.BundleValueError(
907 raise error.BundleValueError(
908 b'negative bundle param size: %i' % paramssize
908 b'negative bundle param size: %i' % paramssize
909 )
909 )
910 if paramssize:
910 if paramssize:
911 params = self._readexact(paramssize)
911 params = self._readexact(paramssize)
912 self._processallparams(params)
912 self._processallparams(params)
913 # The payload itself is decompressed below, so drop
913 # The payload itself is decompressed below, so drop
914 # the compression parameter passed down to compensate.
914 # the compression parameter passed down to compensate.
915 outparams = []
915 outparams = []
916 for p in params.split(b' '):
916 for p in params.split(b' '):
917 k, v = p.split(b'=', 1)
917 k, v = p.split(b'=', 1)
918 if k.lower() != b'compression':
918 if k.lower() != b'compression':
919 outparams.append(p)
919 outparams.append(p)
920 outparams = b' '.join(outparams)
920 outparams = b' '.join(outparams)
921 yield _pack(_fstreamparamsize, len(outparams))
921 yield _pack(_fstreamparamsize, len(outparams))
922 yield outparams
922 yield outparams
923 else:
923 else:
924 yield _pack(_fstreamparamsize, paramssize)
924 yield _pack(_fstreamparamsize, paramssize)
925 # From there, payload might need to be decompressed
925 # From there, payload might need to be decompressed
926 self._fp = self._compengine.decompressorreader(self._fp)
926 self._fp = self._compengine.decompressorreader(self._fp)
927 emptycount = 0
927 emptycount = 0
928 while emptycount < 2:
928 while emptycount < 2:
929 # so we can brainlessly loop
929 # so we can brainlessly loop
930 assert _fpartheadersize == _fpayloadsize
930 assert _fpartheadersize == _fpayloadsize
931 size = self._unpack(_fpartheadersize)[0]
931 size = self._unpack(_fpartheadersize)[0]
932 yield _pack(_fpartheadersize, size)
932 yield _pack(_fpartheadersize, size)
933 if size:
933 if size:
934 emptycount = 0
934 emptycount = 0
935 else:
935 else:
936 emptycount += 1
936 emptycount += 1
937 continue
937 continue
938 if size == flaginterrupt:
938 if size == flaginterrupt:
939 continue
939 continue
940 elif size < 0:
940 elif size < 0:
941 raise error.BundleValueError(b'negative chunk size: %i')
941 raise error.BundleValueError(b'negative chunk size: %i')
942 yield self._readexact(size)
942 yield self._readexact(size)
943
943
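
    # Illustrative sketch (not from the original module): relaying a bundle2
    # reply verbatim, e.g. to proxy the output of 'getbundle' over 'ssh'.
    # Assumes `unbundler` is an unbundle20 instance and `out` is a writable
    # binary file object:
    #
    #     for chunk in unbundler._forwardchunks():
    #         out.write(chunk)
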
    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, b'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, b'end of bundle2 stream')
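
    # Illustrative sketch (not from the original module): reading every part
    # of a bundle. Assumes `ui` is a ui object and `fp` is a binary file
    # object positioned after the b'HG20' magic string:
    #
    #     unbundler = unbundle20(ui, fp)
    #     for part in unbundler.iterparts():
    #         ui.debug(b'found part: %s\n' % part.type)
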
    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params  # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()


formatmap = {b'20': unbundle20}

b2streamparamsmap = {}


def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""

    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func

    return decorator


@b2streamparamhandler(b'compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True
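

# Illustrative sketch (not from the original module): registering a handler
# for a hypothetical advisory stream level parameter. Because the name is
# lower case, peers that do not know it will simply ignore it:
#
#     @b2streamparamhandler(b'myfeature')
#     def processmyfeature(unbundler, param, value):
#         indebug(unbundler.ui, b'stream uses myfeature=%s' % value)

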
class bundlepart:
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has begun.
    """

    def __init__(
        self,
        parttype,
        mandatoryparams=(),
        advisoryparams=(),
        data=b'',
        mandatory=True,
    ):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError(b'duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
            cls,
            id(self),
            self.id,
            self.type,
            self.mandatory,
        )

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(
            self.type,
            self._mandatoryparams,
            self._advisoryparams,
            self._data,
            self.mandatory,
        )

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value=b'', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        if name in self._seenparams:
            raise ValueError(b'duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))
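
    # Illustrative sketch (not from the original module): assembling a part
    # with a hypothetical b'x-example' type, one mandatory and one advisory
    # parameter. In practice parts are usually created via bundle20.newpart(),
    # which also assigns the part id:
    #
    #     part = bundlepart(b'x-example', data=b'some payload')
    #     part.addparam(b'version', b'1')  # mandatory by default
    #     part.addparam(b'hint', b'fast', mandatory=False)  # advisory
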
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError(b'part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = [b'bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            if not self.data:
                msg.append(b' empty payload')
            elif util.safehasattr(self.data, 'next') or util.safehasattr(
                self.data, b'__next__'
            ):
                msg.append(b' streamed payload')
            else:
                msg.append(b' %i bytes payload' % len(self.data))
            msg.append(b'\n')
            ui.debug(b''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [
            _pack(_fparttypesize, len(parttype)),
            parttype,
            _pack(_fpartid, self.id),
        ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = b''.join(header)
        except TypeError:
            raise TypeError(
                'Found a non-bytes trying to '
                'build bundle part header: %r' % header
            )
        outdebug(ui, b'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, b'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug(b'bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug(
                b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
            )
            tb = sys.exc_info()[2]
            msg = b'unexpected error: %s' % bexc
            interpart = bundlepart(
                b'error:abort', [(b'message', msg)], mandatory=False
            )
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, b'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, b'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True
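
    # Illustrative sketch (not from the original module): the generator above
    # frames a part as a size-prefixed header followed by size-prefixed
    # payload chunks and a closing zero-size chunk, so draining it yields the
    # exact bytes of this part as they appear in the bundle2 stream (assumes
    # `part.id` has been set, as bundle20.addpart() would do):
    #
    #     part.id = 0
    #     partblob = b''.join(part.getchunks(ui=ui))
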
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next') or util.safehasattr(
            self.data, '__next__'
        ):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1


class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug(
            b'bundle2-input-stream-interrupt: opening out of band context\n'
        )
        indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, b'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug(
            b'bundle2-input-stream-interrupt: closing out of band context\n'
        )


class interruptoperation:
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError(b'no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable(b'no repo access from stream interruption')


def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool(b'devel', b'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, b'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(
                    _(
                        b'stream ended unexpectedly '
                        b' (got %d bytes, expected %d)'
                    )
                    % (len(s), chunksize)
                )

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                b'negative payload chunk size: %s' % chunksize
            )

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(
                _(b'stream ended unexpectedly (got %d bytes, expected %d)')
                % (len(s), headersize)
            )

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
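

# Illustrative sketch (not from the original module): decodepayloadchunks()
# is the reading-side counterpart of the framing emitted by
# bundlepart.getchunks(). Reassembling a raw payload from a file handle `fh`
# positioned at the first payload chunk of a part:
#
#     payload = b''.join(decodepayloadchunks(ui, fh))

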
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
            fp, 'tell'
        )
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset : (offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, b'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = self.type != self.type.lower()
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug(
                    b'bundle2-input-part: total payload size %i\n' % self._pos
                )
            self.consumed = True
        return data


class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """

    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, b'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), (
                b'Unknown chunk %d' % chunknum
            )
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError(b'Unknown chunk')

    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError(b'Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError(b'Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_(b'Seek failed\n'))
            self._pos = newpos
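
    # Illustrative sketch (not from the original module): random access within
    # a part payload. Offsets refer to the decoded payload, not to positions
    # in the underlying bundle2 stream:
    #
    #     part.seek(0, os.SEEK_END)  # indexes every chunk along the way
    #     size = part.tell()
    #     part.seek(0)               # rewind to the start of the payload
    #     magic = part.read(4)
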
    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_(b'File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None


# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {
    b'HG20': (),
    b'bookmarks': (),
    b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
    b'listkeys': (),
    b'pushkey': (),
    b'digests': tuple(sorted(util.DIGESTS.keys())),
    b'remote-changegroup': (b'http', b'https'),
    b'hgtagsfnodes': (),
    b'phases': (b'heads',),
    b'stream': (b'v2',),
}


def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in (b'client', b'server'):
        raise error.ProgrammingError(b'role argument must be client or server')

    caps = capabilities.copy()
    caps[b'changegroup'] = tuple(
        sorted(changegroup.supportedincomingversions(repo))
    )
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
        caps[b'obsmarkers'] = supportedformat
    if allowpushback:
        caps[b'pushback'] = ()
    cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
    if cpmode == b'check-related':
        caps[b'checkheads'] = (b'related',)
    if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
        caps.pop(b'phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == b'server':
        streamsupported = repo.ui.configbool(
            b'server', b'uncompressed', untrusted=True
        )
        featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop(b'stream')
    # Else always advertise support on client, because payload support
    # should always be advertised.

    if repo.ui.configbool(b'experimental', b'stream-v3'):
        if b'stream' in caps:
            caps[b'stream'] += (b'v3-exp',)

    # b'rev-branch-cache' is no longer advertised, but still supported
    # for legacy clients.

    return caps
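

# Illustrative sketch (not from the original module): a server building its
# advertisement. The resulting dict is typically serialized with encodecaps()
# (defined earlier in this module) before being advertised in the urlquoted
# 'bundle2' capability string:
#
#     caps = getrepocaps(repo, role=b'server')
#     blob = encodecaps(caps)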
def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable(b'bundle2')
    if not raw and raw != b'':
        return {}
    capsblob = urlreq.unquote(remote.capable(b'bundle2'))
    return decodecaps(capsblob)


def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict"""
    obscaps = caps.get(b'obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
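

# Illustrative sketch (not from the original module): combining the two
# helpers above to find which obsmarker format versions a peer accepts:
#
#     remotecaps = bundle2caps(remote)
#     versions = obsmarkersversion(remotecaps)
#     # e.g. {b'obsmarkers': (b'V0', b'V1')} yields [0, 1]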

def writenewbundle(
    ui,
    repo,
    source,
    filename,
    bundletype,
    outgoing,
    opts,
    vfs=None,
    compression=None,
    compopts=None,
    allow_internal=False,
):
    if bundletype.startswith(b'HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
        return writebundle(
            ui,
            cg,
            filename,
            bundletype,
            vfs=vfs,
            compression=compression,
            compopts=compopts,
        )
    elif not bundletype.startswith(b'HG20'):
        raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)

    # enforce that no internal phases are to be bundled
    bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
    if bundled_internal and not allow_internal:
        count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
        msg = "backup bundle would contain %d internal changesets"
        msg %= count
        raise error.ProgrammingError(msg)

    caps = {}
    if opts.get(b'obsolescence', False):
        caps[b'obsmarkers'] = (b'V1',)
    if opts.get(b'streamv2'):
        caps[b'stream'] = [b'v2']
    elif opts.get(b'streamv3-exp'):
        caps[b'stream'] = [b'v3-exp']
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
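

# Illustrative sketch (not from the original module): writing a plain HG20
# bundle containing a changegroup for some outgoing changesets. `outgoing` is
# assumed to be a discovery.outgoing instance and b'BZ' one of the bundle
# compression engine names:
#
#     writenewbundle(ui, repo, b'bundle', b'dump.hg', b'HG20', outgoing,
#                    {b'changegroup': True}, compression=b'BZ')
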
1749 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now, so we keep them separated for the sake of
    # simplicity.

    # we might not always want a changegroup in such a bundle, for example
    # in stream bundles
    if opts.get(b'changegroup', True):
        cgversion = opts.get(b'cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        if opts.get(b'phases'):
            target_phase = phases.draft
            for head in outgoing.ancestorsof:
                target_phase = max(target_phase, repo[head].phase())
            if target_phase > phases.draft:
                part.addparam(
                    b'targetphase',
                    b'%d' % target_phase,
                    mandatory=False,
                )
        if repository.REPO_FEATURE_SIDE_DATA in repo.features:
            part.addparam(b'exp-sidedata', b'1')

    if opts.get(b'streamv2', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'streamv3-exp', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get(b'revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get(b'obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(
            bundler,
            obsmarkers,
            mandatory=opts.get(b'obsolescence-mandatory', True),
        )

    if opts.get(b'phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart(b'phase-heads', data=phasedata)
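

# Illustrative sketch only (hypothetical helper, not part of this module's
# API): how a caller might drive _addpartsfromopts(). The `ui`, `repo` and
# `outgoing` objects are assumed to come from the usual localrepo and
# discovery machinery; the opts keys mirror the ones read above.
def _example_build_full_bundle(ui, repo, outgoing):
    """Assemble a bundle2 with a changegroup plus the optional cache parts."""
    bundler = bundle20(ui)
    opts = {
        b'changegroup': True,
        b'phases': True,
        b'tagsfnodescache': True,
        b'revbranchcache': True,
        b'obsolescence': False,
    }
    _addpartsfromopts(ui, repo, bundler, b'bundle', outgoing, opts)
    return bundler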


def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
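

# Illustrative sketch (hypothetical helper): the 'hgtagsfnodes' payload
# built above is a flat concatenation of 20-byte (changeset node, .hgtags
# filenode) pairs, so decoding is symmetric to the chunks list.
def _example_decode_hgtagsfnodes(data):
    """Yield (node, fnode) pairs from an 'hgtagsfnodes' part payload."""
    assert len(data) % 40 == 0
    for offset in range(0, len(data), 40):
        yield data[offset : offset + 20], data[offset + 20 : offset + 40]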


def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changesets
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
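

# Illustrative sketch (hypothetical parser) of the 'cache:rev-branch-cache'
# record layout produced by generate() above: a '>III' header carrying the
# branch name length and the open/closed node counts, followed by the UTF-8
# branch name and the 20-byte nodes themselves.
def _example_decode_rbc_record(data, offset=0):
    import struct

    header = struct.Struct(b'>III')  # mirrors rbcstruct defined further down
    namelen, opencount, closedcount = header.unpack_from(data, offset)
    offset += header.size
    branch = data[offset : offset + namelen]
    offset += namelen
    nodes = []
    for _ in range(opencount + closedcount):
        nodes.append(data[offset : offset + 20])
        offset += 20
    return branch, nodes[:opencount], nodes[opencount:], offset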


def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params
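

# Hypothetical usage note for the two helpers above: both return URL-quoted
# bytes, so a comma in the joined requirement list survives transport, e.g.
#
#   _formatrequirementsspec([b'revlogv1', b'store'])
#       -> b'revlogv1%2Cstore'
#   _formatrequirementsparams([b'revlogv1', b'store'])
#       -> b'requirements%3Drevlogv1%2Cstore'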


def format_remote_wanted_sidedata(repo):
    """Formats a repo's wanted sidedata categories into a bytestring for
    capabilities exchange."""
    wanted = b""
    if repo._wanted_sidedata:
        wanted = b','.join(
            pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
        )
    return wanted


def read_remote_wanted_sidedata(remote):
    sidedata_categories = remote.capable(b'exp-wanted-sidedata')
    return read_wanted_sidedata(sidedata_categories)


def read_wanted_sidedata(formatted):
    if formatted:
        return set(formatted.split(b','))
    return set()
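

# Hypothetical round-trip sketch for the sidedata helpers above: the wire
# form is just a sorted, comma-separated list of category names, so
# read_wanted_sidedata() inverts format_remote_wanted_sidedata().
#
#   read_wanted_sidedata(b'changes,files') -> {b'changes', b'files'}
#   read_wanted_sidedata(b'')              -> set()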


def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        msg = _(b'stream data requested but server does not allow this feature')
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    if b'stream' not in bundler.capabilities:
        msg = _(
            b'stream data requested but supported streaming clone versions were not specified'
        )
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    client_supported = set(bundler.capabilities[b'stream'])
    server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
    common_supported = client_supported & server_supported
    if not common_supported:
        msg = _(b'no common supported version with the client: %s; %s')
        str_server = b','.join(sorted(server_supported))
        str_client = b','.join(sorted(client_supported))
        msg %= (str_server, str_client)
        raise error.Abort(msg)
    version = max(common_supported)

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get('includepats')
    excludepats = kwargs.get('excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    if version == b"v2":
        filecount, bytecount, it = streamclone.generatev2(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream2', data=it)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'requirements', requirements, mandatory=True)
    elif version == b"v3-exp":
        it = streamclone.generatev3(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream3-exp', data=it)
        part.addparam(b'requirements', requirements, mandatory=True)
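

# Illustrative sketch (hypothetical helper) of the version negotiation
# performed above: both sides advertise the stream-bundle versions they
# support and the highest common one wins.
def _example_pick_stream_version(client_versions, server_versions):
    common = set(client_versions) & set(server_versions)
    if not common:
        raise ValueError('no common stream bundle version')
    # byte-wise max, e.g. b"v3-exp" is preferred over b"v2"
    return max(common)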


def buildobsmarkerspart(bundler, markers, mandatory=True):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)


def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only support v1 changegroups')
            )

        # HG20 is the case without 2 values to unpack, but is handled above.
        # pytype: disable=bad-unpacking
        header, comp = bundletypes[bundletype]
        # pytype: enable=bad-unpacking

        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
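

# Hypothetical usage sketch for writebundle(): writing a changegroup `cg`
# (as produced by changegroup.makechangegroup()) to an uncompressed bundle2
# file. For pre-bundle2 types such as b'BZ' the compression is implied by
# the bundletype and `compression` must stay None.
#
#   fname = writebundle(ui, cg, b'out.hg', b'HG20', compression=None)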


def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result
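

# Worked example (hypothetical values) of the head-count encoding folded by
# combinechangegroupresults(): each addchangegroup return is 1 for "no
# change", 1 + n when n heads were added, -1 - n when n heads were removed,
# and 0 on error.
#
#   [1, 3, 1] -> two heads added  -> combined result 3
#   [1, -2]   -> one head removed -> combined result -2
#   [1, 0, 3] -> any error wins   -> combined result 0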


@parthandler(
    b'changegroup',
    (
        b'version',
        b'nbchanges',
        b'exp-sidedata',
        b'exp-wanted-sidedata',
        b'treemanifest',
        b'targetphase',
    ),
)
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo"""
    from . import localrepo

    tr = op.gettransaction()
    unpackerversion = inpart.params.get(b'version', b'01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if b'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get(b'nbchanges'))
    if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
        if len(op.repo.changelog) != 0:
            raise error.Abort(
                _(
                    b"bundle contains tree manifests, but local repo is "
                    b"non-empty and does not use tree manifests"
                )
            )
        op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
        op.repo.svfs.options = localrepo.resolvestorevfsoptions(
            op.repo.ui, op.repo.requirements, op.repo.features
        )
        scmutil.writereporequirements(op.repo)

    extrakwargs = {}
    targetphase = inpart.params.get(b'targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)

    remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
    extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)

    ret = _processchangegroup(
        op,
        cg,
        tr,
        op.source,
        b'bundle2',
        expectedtotal=nbchangesets,
        **extrakwargs
    )
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup', mandatory=False)
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    assert not inpart.read()


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate what was
        retrieved by the client matches what the server knows about the
        bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = urlutil.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (urlutil.hidepassword(raw_url), e.message)
        )
    assert not inpart.read()
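

# Illustrative sketch (hypothetical helper) of the checks that
# util.digestchecker performs above: the fetched payload must match both
# the declared size and every declared digest.
def _example_check_payload(data, size, digests):
    """`digests` maps a hash name (e.g. b'sha1') to the expected hex value."""
    import hashlib

    if len(data) != size:
        raise ValueError('size mismatch')
    for name, expected in digests.items():
        digest = hashlib.new(name.decode('ascii'), data).hexdigest()
        if digest.encode('ascii') != expected:
            raise ValueError('digest mismatch: %r' % name)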


@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races on bookmarks. It contains
    binary encoded (bookmark, node) tuples. If the local state does not
    match the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(op.repo, inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    short(node),
                    short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )
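

# Illustrative sketch (hypothetical helper): the 'check:heads' and
# 'check:updated-heads' payloads are plain concatenations of 20-byte binary
# nodes, read back exactly as the loops above and below do.
def _example_encode_heads(heads):
    for h in heads:
        assert len(h) == 20
    return b''.join(heads)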


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. If other activities happen on unrelated heads,
    they are ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in phasetonodes.items():
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )


@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior
    is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(op.repo, inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = hex(bookstore.get(book, b''))
                hookargs[b'new'] = hex(node if node is not None else b'')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        for book, node in changes:
            if bookmarks.isdivergent(book):
                msg = _(b'cannot accept divergent bookmark %s!') % book
                raise error.Abort(msg)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook(unused_success):
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )


@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


@parthandler(b'hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)


rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """Legacy part, ignored for compatibility with bundles from or
    for Mercurial before 5.7. Newer Mercurial computes the cache
    efficiently enough during unbundling that the additional transfer
    is unnecessary."""


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on the server side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)
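

# Hypothetical illustration of the naming convention above: a client-side
# `--pushvar DEBUG=1` arrives as an advisory param with key b'DEBUG' (or a
# lower-case variant, hence the upper()), is stored as
# hookargs[b'USERVAR_DEBUG'] = b'1', and, given the usual hook environment
# prefixing, would reach shell hooks as HG_USERVAR_DEBUG=1.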
2600 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2600 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2601 def handlestreamv2bundle(op, part):
2601 def handlestreamv2bundle(op, part):
2602
2602
2603 requirements = urlreq.unquote(part.params[b'requirements'])
2603 requirements = urlreq.unquote(part.params[b'requirements'])
2604 requirements = requirements.split(b',') if requirements else []
2604 requirements = requirements.split(b',') if requirements else []
2605 filecount = int(part.params[b'filecount'])
2605 filecount = int(part.params[b'filecount'])
2606 bytecount = int(part.params[b'bytecount'])
2606 bytecount = int(part.params[b'bytecount'])
2607
2607
2608 repo = op.repo
2608 repo = op.repo
2609 if len(repo):
2609 if len(repo):
2610 msg = _(b'cannot apply stream clone to non empty repository')
2610 msg = _(b'cannot apply stream clone to non empty repository')
2611 raise error.Abort(msg)
2611 raise error.Abort(msg)
2612
2612
2613 repo.ui.debug(b'applying stream bundle\n')
2613 repo.ui.debug(b'applying stream bundle\n')
2614 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2614 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
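# A minimal sketch (illustrative, using the str-based stdlib rather than
# Mercurial's byte-oriented urlreq wrapper): the 'requirements' parameter is
# a URL-quoted, comma-separated list, and an empty string must decode to an
# empty list, which is why the handler guards the split.

from urllib.parse import quote, unquote

def decode_requirements(raw):
    raw = unquote(raw)
    return raw.split(',') if raw else []

assert decode_requirements('') == []
assert decode_requirements(quote('store,fncache')) == ['store', 'fncache']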
2615
2615
2616
2616
2617 @parthandler(b'stream3-exp', (b'requirements',))
2617 @parthandler(b'stream3-exp', (b'requirements',))
2618 def handlestreamv3bundle(op, part):
2618 def handlestreamv3bundle(op, part):
2619 requirements = urlreq.unquote(part.params[b'requirements'])
2619 requirements = urlreq.unquote(part.params[b'requirements'])
2620 requirements = requirements.split(b',') if requirements else []
2620 requirements = requirements.split(b',') if requirements else []
2621
2621
2622 repo = op.repo
2622 repo = op.repo
2623 if len(repo):
2623 if len(repo):
2624 msg = _(b'cannot apply stream clone to non empty repository')
2624 msg = _(b'cannot apply stream clone to non empty repository')
2625 raise error.Abort(msg)
2625 raise error.Abort(msg)
2626
2626
2627 repo.ui.debug(b'applying stream bundle\n')
2627 repo.ui.debug(b'applying stream bundle\n')
2628 streamclone.applybundlev3(repo, part, requirements)
2628 streamclone.applybundlev3(repo, part, requirements)
2629
2629
2630
2630
2631 def widen_bundle(
2631 def widen_bundle(
2632 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2632 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2633 ):
2633 ):
2634 """generates bundle2 for widening a narrow clone
2634 """generates bundle2 for widening a narrow clone
2635
2635
2636 bundler is the bundle to which data should be added
2636 bundler is the bundle to which data should be added
2637 repo is the localrepository instance
2637 repo is the localrepository instance
2638 oldmatcher matches what the client already has
2638 oldmatcher matches what the client already has
2639 newmatcher matches what the client needs (including what it already has)
2639 newmatcher matches what the client needs (including what it already has)
2640 common is the set of common heads between server and client
2640 common is the set of common heads between server and client
2641 known is a set of revs known on the client side (used in ellipses)
2641 known is a set of revs known on the client side (used in ellipses)
2642 cgversion is the changegroup version to send
2642 cgversion is the changegroup version to send
2643 ellipses is a boolean telling whether to send ellipses data or not
2643 ellipses is a boolean telling whether to send ellipses data or not
2644
2644
2645 returns a bundle2 of the data required for extending
2645 returns a bundle2 of the data required for extending
2646 """
2646 """
2647 commonnodes = set()
2647 commonnodes = set()
2648 cl = repo.changelog
2648 cl = repo.changelog
2649 for r in repo.revs(b"::%ln", common):
2649 for r in repo.revs(b"::%ln", common):
2650 commonnodes.add(cl.node(r))
2650 commonnodes.add(cl.node(r))
2651 if commonnodes:
2651 if commonnodes:
2652 packer = changegroup.getbundler(
2652 packer = changegroup.getbundler(
2653 cgversion,
2653 cgversion,
2654 repo,
2654 repo,
2655 oldmatcher=oldmatcher,
2655 oldmatcher=oldmatcher,
2656 matcher=newmatcher,
2656 matcher=newmatcher,
2657 fullnodes=commonnodes,
2657 fullnodes=commonnodes,
2658 )
2658 )
2659 cgdata = packer.generate(
2659 cgdata = packer.generate(
2660 {repo.nullid},
2660 {repo.nullid},
2661 list(commonnodes),
2661 list(commonnodes),
2662 False,
2662 False,
2663 b'narrow_widen',
2663 b'narrow_widen',
2664 changelog=False,
2664 changelog=False,
2665 )
2665 )
2666
2666
2667 part = bundler.newpart(b'changegroup', data=cgdata)
2667 part = bundler.newpart(b'changegroup', data=cgdata)
2668 part.addparam(b'version', cgversion)
2668 part.addparam(b'version', cgversion)
2669 if scmutil.istreemanifest(repo):
2669 if scmutil.istreemanifest(repo):
2670 part.addparam(b'treemanifest', b'1')
2670 part.addparam(b'treemanifest', b'1')
2671 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2671 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2672 part.addparam(b'exp-sidedata', b'1')
2672 part.addparam(b'exp-sidedata', b'1')
2673 wanted = format_remote_wanted_sidedata(repo)
2673 wanted = format_remote_wanted_sidedata(repo)
2674 part.addparam(b'exp-wanted-sidedata', wanted)
2674 part.addparam(b'exp-wanted-sidedata', wanted)
2675
2675
2676 return bundler
2676 return bundler
@@ -1,1122 +1,1121 b''
1 # upgrade.py - functions for in place upgrade of Mercurial repository
1 # upgrade.py - functions for in place upgrade of Mercurial repository
2 #
2 #
3 # Copyright (c) 2016-present, Gregory Szorc
3 # Copyright (c) 2016-present, Gregory Szorc
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8
8
9 from ..i18n import _
9 from ..i18n import _
10 from .. import (
10 from .. import (
11 error,
11 error,
12 localrepo,
12 localrepo,
13 pycompat,
13 pycompat,
14 requirements,
14 requirements,
15 revlog,
15 revlog,
16 util,
16 util,
17 )
17 )
18
18
19 from ..utils import compression
19 from ..utils import compression
20
20
21 if pycompat.TYPE_CHECKING:
21 if pycompat.TYPE_CHECKING:
22 from typing import (
22 from typing import (
23 List,
23 List,
24 Type,
24 Type,
25 )
25 )
26
26
27
27
28 # list of requirements that require a re-clone of all revlogs if added/removed
28 # list of requirements that require a re-clone of all revlogs if added/removed
29 RECLONES_REQUIREMENTS = {
29 RECLONES_REQUIREMENTS = {
30 requirements.GENERALDELTA_REQUIREMENT,
30 requirements.GENERALDELTA_REQUIREMENT,
31 requirements.SPARSEREVLOG_REQUIREMENT,
31 requirements.SPARSEREVLOG_REQUIREMENT,
32 requirements.REVLOGV2_REQUIREMENT,
32 requirements.REVLOGV2_REQUIREMENT,
33 requirements.CHANGELOGV2_REQUIREMENT,
33 requirements.CHANGELOGV2_REQUIREMENT,
34 }
34 }
35
35
36
36
37 def preservedrequirements(repo):
37 def preservedrequirements(repo):
38 preserved = {
38 preserved = {
39 requirements.SHARED_REQUIREMENT,
39 requirements.SHARED_REQUIREMENT,
40 requirements.NARROW_REQUIREMENT,
40 requirements.NARROW_REQUIREMENT,
41 }
41 }
42 return preserved & repo.requirements
42 return preserved & repo.requirements
43
43
44
44
45 FORMAT_VARIANT = b'deficiency'
45 FORMAT_VARIANT = b'deficiency'
46 OPTIMISATION = b'optimization'
46 OPTIMISATION = b'optimization'
47
47
48
48
49 class improvement:
49 class improvement:
50 """Represents an improvement that can be made as part of an upgrade."""
50 """Represents an improvement that can be made as part of an upgrade."""
51
51
52 ### The following attributes should be defined for each subclass:
52 ### The following attributes should be defined for each subclass:
53
53
54 # Either ``FORMAT_VARIANT`` or ``OPTIMISATION``.
54 # Either ``FORMAT_VARIANT`` or ``OPTIMISATION``.
55 # A format variant is where we change the storage format. Not all format
55 # A format variant is where we change the storage format. Not all format
56 # variant changes are an obvious problem.
56 # variant changes are an obvious problem.
57 # An optimization is an action (sometimes optional) that
57 # An optimization is an action (sometimes optional) that
58 # can be taken to further improve the state of the repository.
58 # can be taken to further improve the state of the repository.
59 type = None
59 type = None
60
60
61 # machine-readable string uniquely identifying this improvement. It will be
61 # machine-readable string uniquely identifying this improvement. It will be
62 # mapped to an action later in the upgrade process.
62 # mapped to an action later in the upgrade process.
63 name = None
63 name = None
64
64
65 # message intended for humans explaining the improvement in more detail,
65 # message intended for humans explaining the improvement in more detail,
66 # including the implications of it. ``FORMAT_VARIANT`` types should be
66 # including the implications of it. ``FORMAT_VARIANT`` types should be
67 # worded
67 # worded
68 # in the present tense.
68 # in the present tense.
69 description = None
69 description = None
70
70
71 # message intended for humans explaining what an upgrade addressing this
71 # message intended for humans explaining what an upgrade addressing this
72 # issue will do. should be worded in the future tense.
72 # issue will do. should be worded in the future tense.
73 upgrademessage = None
73 upgrademessage = None
74
74
75 # value of current Mercurial default for new repository
75 # value of current Mercurial default for new repository
76 default = None
76 default = None
77
77
78 # Message intended for humans which will be shown after an upgrade
78 # Message intended for humans which will be shown after an upgrade
79 # operation in which the improvement was added
79 # operation in which the improvement was added
80 postupgrademessage = None
80 postupgrademessage = None
81
81
82 # Message intended for humans which will be shown after an upgrade
82 # Message intended for humans which will be shown after an upgrade
83 # operation in which this improvement was removed
83 # operation in which this improvement was removed
84 postdowngrademessage = None
84 postdowngrademessage = None
85
85
86 # By default we assume that every improvement touches requirements and all revlogs
86 # By default we assume that every improvement touches requirements and all revlogs
87
87
88 # Whether this improvement touches filelogs
88 # Whether this improvement touches filelogs
89 touches_filelogs = True
89 touches_filelogs = True
90
90
91 # Whether this improvement touches manifests
91 # Whether this improvement touches manifests
92 touches_manifests = True
92 touches_manifests = True
93
93
94 # Whether this improvement touches changelog
94 # Whether this improvement touches changelog
95 touches_changelog = True
95 touches_changelog = True
96
96
97 # Whether this improvement changes repository requirements
97 # Whether this improvement changes repository requirements
98 touches_requirements = True
98 touches_requirements = True
99
99
100 # Whether this improvement touches the dirstate
100 # Whether this improvement touches the dirstate
101 touches_dirstate = False
101 touches_dirstate = False
102
102
103 # Can this action be run on a share instead of its main repository?
103 # Can this action be run on a share instead of its main repository?
104 compatible_with_share = False
104 compatible_with_share = False
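# A hypothetical sketch (not a real variant): a new improvement usually just
# fills in the class attributes documented above; the defaults deliberately
# assume it rewrites requirements and every revlog unless it opts out.

class widget_storage(improvement):  # hypothetical example subclass
    type = FORMAT_VARIANT
    name = b'widget-storage'
    description = b'widgets are stored inefficiently'       # present tense
    upgrademessage = b'widgets will be stored efficiently'  # future tense
    # a requirements-only change can skip the expensive revlog rewrite:
    touches_filelogs = False
    touches_manifests = False
    touches_changelog = False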
105
105
106
106
107 allformatvariant = [] # type: List[Type['formatvariant']]
107 allformatvariant = [] # type: List[Type['formatvariant']]
108
108
109
109
110 def registerformatvariant(cls):
110 def registerformatvariant(cls):
111 allformatvariant.append(cls)
111 allformatvariant.append(cls)
112 return cls
112 return cls
113
113
114
114
115 class formatvariant(improvement):
115 class formatvariant(improvement):
116 """an improvement subclass dedicated to repository format"""
116 """an improvement subclass dedicated to repository format"""
117
117
118 type = FORMAT_VARIANT
118 type = FORMAT_VARIANT
119
119
120 @staticmethod
120 @staticmethod
121 def fromrepo(repo):
121 def fromrepo(repo):
122 """current value of the variant in the repository"""
122 """current value of the variant in the repository"""
123 raise NotImplementedError()
123 raise NotImplementedError()
124
124
125 @staticmethod
125 @staticmethod
126 def fromconfig(repo):
126 def fromconfig(repo):
127 """current value of the variant in the configuration"""
127 """current value of the variant in the configuration"""
128 raise NotImplementedError()
128 raise NotImplementedError()
129
129
130
130
131 class requirementformatvariant(formatvariant):
131 class requirementformatvariant(formatvariant):
132 """formatvariant based on a 'requirement' name.
132 """formatvariant based on a 'requirement' name.
133
133
134 Many format variants are controlled by a 'requirement'. We define a small
134 Many format variants are controlled by a 'requirement'. We define a small
135 subclass to factor the code.
135 subclass to factor the code.
136 """
136 """
137
137
138 # the requirement that controls this format variant
138 # the requirement that controls this format variant
139 _requirement = None
139 _requirement = None
140
140
141 @staticmethod
141 @staticmethod
142 def _newreporequirements(ui):
142 def _newreporequirements(ui):
143 return localrepo.newreporequirements(
143 return localrepo.newreporequirements(
144 ui, localrepo.defaultcreateopts(ui)
144 ui, localrepo.defaultcreateopts(ui)
145 )
145 )
146
146
147 @classmethod
147 @classmethod
148 def fromrepo(cls, repo):
148 def fromrepo(cls, repo):
149 assert cls._requirement is not None
149 assert cls._requirement is not None
150 return cls._requirement in repo.requirements
150 return cls._requirement in repo.requirements
151
151
152 @classmethod
152 @classmethod
153 def fromconfig(cls, repo):
153 def fromconfig(cls, repo):
154 assert cls._requirement is not None
154 assert cls._requirement is not None
155 return cls._requirement in cls._newreporequirements(repo.ui)
155 return cls._requirement in cls._newreporequirements(repo.ui)
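# A minimal standalone sketch (illustrative): for requirement-backed variants
# the two classmethods above reduce to set membership, so "the config would
# enable a requirement the repository lacks" is the whole upgrade test.

def wants_requirement_upgrade(requirement, repo_reqs, new_repo_reqs):
    """True when a fresh repo would get `requirement` but this one lacks it."""
    return requirement in new_repo_reqs and requirement not in repo_reqs

assert wants_requirement_upgrade(b'share-safe', {b'store'}, {b'store', b'share-safe'})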
156
156
157
157
158 @registerformatvariant
158 @registerformatvariant
159 class fncache(requirementformatvariant):
159 class fncache(requirementformatvariant):
160 name = b'fncache'
160 name = b'fncache'
161
161
162 _requirement = requirements.FNCACHE_REQUIREMENT
162 _requirement = requirements.FNCACHE_REQUIREMENT
163
163
164 default = True
164 default = True
165
165
166 description = _(
166 description = _(
167 b'long and reserved filenames may not work correctly; '
167 b'long and reserved filenames may not work correctly; '
168 b'repository performance is sub-optimal'
168 b'repository performance is sub-optimal'
169 )
169 )
170
170
171 upgrademessage = _(
171 upgrademessage = _(
172 b'repository will be more resilient to storing '
172 b'repository will be more resilient to storing '
173 b'certain paths and performance of certain '
173 b'certain paths and performance of certain '
174 b'operations should be improved'
174 b'operations should be improved'
175 )
175 )
176
176
177
177
178 @registerformatvariant
178 @registerformatvariant
179 class dirstatev2(requirementformatvariant):
179 class dirstatev2(requirementformatvariant):
180 name = b'dirstate-v2'
180 name = b'dirstate-v2'
181 _requirement = requirements.DIRSTATE_V2_REQUIREMENT
181 _requirement = requirements.DIRSTATE_V2_REQUIREMENT
182
182
183 default = False
183 default = False
184
184
185 description = _(
185 description = _(
186 b'version 1 of the dirstate file format requires '
186 b'version 1 of the dirstate file format requires '
187 b'reading and parsing it all at once.\n'
187 b'reading and parsing it all at once.\n'
188 b'Version 2 has a better structure, '
188 b'Version 2 has a better structure, '
189 b'better information and a lighter update mechanism'
189 b'better information and a lighter update mechanism'
190 )
190 )
191
191
192 upgrademessage = _(b'"hg status" will be faster')
192 upgrademessage = _(b'"hg status" will be faster')
193
193
194 touches_filelogs = False
194 touches_filelogs = False
195 touches_manifests = False
195 touches_manifests = False
196 touches_changelog = False
196 touches_changelog = False
197 touches_requirements = True
197 touches_requirements = True
198 touches_dirstate = True
198 touches_dirstate = True
199 compatible_with_share = True
199 compatible_with_share = True
200
200
201
201
202 @registerformatvariant
202 @registerformatvariant
203 class dirstatetrackedkey(requirementformatvariant):
203 class dirstatetrackedkey(requirementformatvariant):
204 name = b'tracked-hint'
204 name = b'tracked-hint'
205 _requirement = requirements.DIRSTATE_TRACKED_HINT_V1
205 _requirement = requirements.DIRSTATE_TRACKED_HINT_V1
206
206
207 default = False
207 default = False
208
208
209 description = _(
209 description = _(
210 b'Add a small file to help external tooling that watches the tracked set'
210 b'Add a small file to help external tooling that watches the tracked set'
211 )
211 )
212
212
213 upgrademessage = _(
213 upgrademessage = _(
214 b'external tools will be informed of potential changes in the tracked set'
214 b'external tools will be informed of potential changes in the tracked set'
215 )
215 )
216
216
217 touches_filelogs = False
217 touches_filelogs = False
218 touches_manifests = False
218 touches_manifests = False
219 touches_changelog = False
219 touches_changelog = False
220 touches_requirements = True
220 touches_requirements = True
221 touches_dirstate = True
221 touches_dirstate = True
222 compatible_with_share = True
222 compatible_with_share = True
223
223
224
224
225 @registerformatvariant
225 @registerformatvariant
226 class dotencode(requirementformatvariant):
226 class dotencode(requirementformatvariant):
227 name = b'dotencode'
227 name = b'dotencode'
228
228
229 _requirement = requirements.DOTENCODE_REQUIREMENT
229 _requirement = requirements.DOTENCODE_REQUIREMENT
230
230
231 default = True
231 default = True
232
232
233 description = _(
233 description = _(
234 b'storage of filenames beginning with a period or '
234 b'storage of filenames beginning with a period or '
235 b'space may not work correctly'
235 b'space may not work correctly'
236 )
236 )
237
237
238 upgrademessage = _(
238 upgrademessage = _(
239 b'repository will be better able to store files '
239 b'repository will be better able to store files '
240 b'beginning with a space or period'
240 b'beginning with a space or period'
241 )
241 )
242
242
243
243
244 @registerformatvariant
244 @registerformatvariant
245 class generaldelta(requirementformatvariant):
245 class generaldelta(requirementformatvariant):
246 name = b'generaldelta'
246 name = b'generaldelta'
247
247
248 _requirement = requirements.GENERALDELTA_REQUIREMENT
248 _requirement = requirements.GENERALDELTA_REQUIREMENT
249
249
250 default = True
250 default = True
251
251
252 description = _(
252 description = _(
253 b'deltas within internal storage are unable to '
253 b'deltas within internal storage are unable to '
254 b'choose optimal revisions; repository is larger and '
254 b'choose optimal revisions; repository is larger and '
255 b'slower than it could be; interaction with other '
255 b'slower than it could be; interaction with other '
256 b'repositories may require extra network and CPU '
256 b'repositories may require extra network and CPU '
257 b'resources, making "hg push" and "hg pull" slower'
257 b'resources, making "hg push" and "hg pull" slower'
258 )
258 )
259
259
260 upgrademessage = _(
260 upgrademessage = _(
261 b'repository storage will be able to create '
261 b'repository storage will be able to create '
262 b'optimal deltas; new repository data will be '
262 b'optimal deltas; new repository data will be '
263 b'smaller and read times should decrease; '
263 b'smaller and read times should decrease; '
264 b'interacting with other repositories using this '
264 b'interacting with other repositories using this '
265 b'storage model should require less network and '
265 b'storage model should require less network and '
266 b'CPU resources, making "hg push" and "hg pull" '
266 b'CPU resources, making "hg push" and "hg pull" '
267 b'faster'
267 b'faster'
268 )
268 )
269
269
270
270
271 @registerformatvariant
271 @registerformatvariant
272 class sharesafe(requirementformatvariant):
272 class sharesafe(requirementformatvariant):
273 name = b'share-safe'
273 name = b'share-safe'
274 _requirement = requirements.SHARESAFE_REQUIREMENT
274 _requirement = requirements.SHARESAFE_REQUIREMENT
275
275
276 default = True
276 default = True
277
277
278 description = _(
278 description = _(
279 b'old shared repositories do not share source repository '
279 b'old shared repositories do not share source repository '
280 b'requirements and config. This leads to various problems '
280 b'requirements and config. This leads to various problems '
281 b'when the source repository format is upgraded or some new '
281 b'when the source repository format is upgraded or some new '
282 b'extensions are enabled.'
282 b'extensions are enabled.'
283 )
283 )
284
284
285 upgrademessage = _(
285 upgrademessage = _(
286 b'Upgrades a repository to share-safe format so that future '
286 b'Upgrades a repository to share-safe format so that future '
287 b'shares of this repository share its requirements and configs.'
287 b'shares of this repository share its requirements and configs.'
288 )
288 )
289
289
290 postdowngrademessage = _(
290 postdowngrademessage = _(
291 b'repository downgraded to not use share safe mode, '
291 b'repository downgraded to not use share safe mode, '
292 b'existing shares will not work and needs to'
292 b'existing shares will not work and need to be reshared.'
293 b' be reshared.'
294 )
293 )
295
294
296 postupgrademessage = _(
295 postupgrademessage = _(
297 b'repository upgraded to share safe mode, existing'
296 b'repository upgraded to share safe mode, existing'
298 b' shares will still work in old non-safe mode. '
297 b' shares will still work in old non-safe mode. '
299 b'Re-share existing shares to use them in safe mode.'
298 b'Re-share existing shares to use them in safe mode.'
300 b' New shares will be created in safe mode.'
299 b' New shares will be created in safe mode.'
301 )
300 )
302
301
303 # upgrade only needs to change the requirements
302 # upgrade only needs to change the requirements
304 touches_filelogs = False
303 touches_filelogs = False
305 touches_manifests = False
304 touches_manifests = False
306 touches_changelog = False
305 touches_changelog = False
307 touches_requirements = True
306 touches_requirements = True
308
307
309
308
310 @registerformatvariant
309 @registerformatvariant
311 class sparserevlog(requirementformatvariant):
310 class sparserevlog(requirementformatvariant):
312 name = b'sparserevlog'
311 name = b'sparserevlog'
313
312
314 _requirement = requirements.SPARSEREVLOG_REQUIREMENT
313 _requirement = requirements.SPARSEREVLOG_REQUIREMENT
315
314
316 default = True
315 default = True
317
316
318 description = _(
317 description = _(
319 b'in order to limit disk reading and memory usage on older '
318 b'in order to limit disk reading and memory usage on older '
320 b'versions, the span of a delta chain from its root to its '
319 b'versions, the span of a delta chain from its root to its '
321 b'end is limited, regardless of the relevant data in this span. '
320 b'end is limited, regardless of the relevant data in this span. '
322 b'This can severely limit the ability of Mercurial to build '
321 b'This can severely limit the ability of Mercurial to build '
323 b'good delta chains, resulting in much more storage space being '
322 b'good delta chains, resulting in much more storage space being '
324 b'taken and limiting the reusability of on-disk deltas during '
323 b'taken and limiting the reusability of on-disk deltas during '
325 b'exchange.'
324 b'exchange.'
326 )
325 )
327
326
328 upgrademessage = _(
327 upgrademessage = _(
329 b'Revlog supports delta chains with more unused data '
328 b'Revlog supports delta chains with more unused data '
330 b'between payloads. These gaps will be skipped at read '
329 b'between payloads. These gaps will be skipped at read '
331 b'time. This allows for better delta chains, yielding '
330 b'time. This allows for better delta chains, yielding '
332 b'better compression and faster exchange with the server.'
331 b'better compression and faster exchange with the server.'
333 )
332 )
334
333
335
334
336 @registerformatvariant
335 @registerformatvariant
337 class persistentnodemap(requirementformatvariant):
336 class persistentnodemap(requirementformatvariant):
338 name = b'persistent-nodemap'
337 name = b'persistent-nodemap'
339
338
340 _requirement = requirements.NODEMAP_REQUIREMENT
339 _requirement = requirements.NODEMAP_REQUIREMENT
341
340
342 default = False
341 default = False
343
342
344 description = _(
343 description = _(
345 b'persist the node -> rev mapping on disk to speed up lookups'
344 b'persist the node -> rev mapping on disk to speed up lookups'
346 )
345 )
347
346
348 upgrademessage = _(b'Speed up revision lookups by node id.')
347 upgrademessage = _(b'Speed up revision lookups by node id.')
349
348
350
349
351 @registerformatvariant
350 @registerformatvariant
352 class copiessdc(requirementformatvariant):
351 class copiessdc(requirementformatvariant):
353 name = b'copies-sdc'
352 name = b'copies-sdc'
354
353
355 _requirement = requirements.COPIESSDC_REQUIREMENT
354 _requirement = requirements.COPIESSDC_REQUIREMENT
356
355
357 default = False
356 default = False
358
357
359 description = _(b'Stores copies information alongside changesets.')
358 description = _(b'Stores copies information alongside changesets.')
360
359
361 upgrademessage = _(
360 upgrademessage = _(
362 b'Allows to use more efficient algorithm to deal with ' b'copy tracing.'
361 b'Allows using a more efficient algorithm to deal with copy tracing.'
363 )
362 )
364
363
365 touches_filelogs = False
364 touches_filelogs = False
366 touches_manifests = False
365 touches_manifests = False
367
366
368
367
369 @registerformatvariant
368 @registerformatvariant
370 class revlogv2(requirementformatvariant):
369 class revlogv2(requirementformatvariant):
371 name = b'revlog-v2'
370 name = b'revlog-v2'
372 _requirement = requirements.REVLOGV2_REQUIREMENT
371 _requirement = requirements.REVLOGV2_REQUIREMENT
373 default = False
372 default = False
374 description = _(b'Version 2 of the revlog.')
373 description = _(b'Version 2 of the revlog.')
375 upgrademessage = _(b'very experimental')
374 upgrademessage = _(b'very experimental')
376
375
377
376
378 @registerformatvariant
377 @registerformatvariant
379 class changelogv2(requirementformatvariant):
378 class changelogv2(requirementformatvariant):
380 name = b'changelog-v2'
379 name = b'changelog-v2'
381 _requirement = requirements.CHANGELOGV2_REQUIREMENT
380 _requirement = requirements.CHANGELOGV2_REQUIREMENT
382 default = False
381 default = False
383 description = _(b'An iteration of the revlog focused on changelog needs.')
382 description = _(b'An iteration of the revlog focused on changelog needs.')
384 upgrademessage = _(b'quite experimental')
383 upgrademessage = _(b'quite experimental')
385
384
386 touches_filelogs = False
385 touches_filelogs = False
387 touches_manifests = False
386 touches_manifests = False
388
387
389
388
390 @registerformatvariant
389 @registerformatvariant
391 class removecldeltachain(formatvariant):
390 class removecldeltachain(formatvariant):
392 name = b'plain-cl-delta'
391 name = b'plain-cl-delta'
393
392
394 default = True
393 default = True
395
394
396 description = _(
395 description = _(
397 b'changelog storage is using deltas instead of '
396 b'changelog storage is using deltas instead of '
398 b'raw entries; changelog reading and any '
397 b'raw entries; changelog reading and any '
399 b'operation relying on changelog data are slower '
398 b'operation relying on changelog data are slower '
400 b'than they could be'
399 b'than they could be'
401 )
400 )
402
401
403 upgrademessage = _(
402 upgrademessage = _(
404 b'changelog storage will be reformatted to '
403 b'changelog storage will be reformatted to '
405 b'store raw entries; changelog reading will be '
404 b'store raw entries; changelog reading will be '
406 b'faster; changelog size may be reduced'
405 b'faster; changelog size may be reduced'
407 )
406 )
408
407
409 @staticmethod
408 @staticmethod
410 def fromrepo(repo):
409 def fromrepo(repo):
411 # Mercurial 4.0 changed changelogs to not use delta chains. Search for
410 # Mercurial 4.0 changed changelogs to not use delta chains. Search for
412 # changelogs with deltas.
411 # changelogs with deltas.
413 cl = repo.changelog
412 cl = repo.changelog
414 chainbase = cl.chainbase
413 chainbase = cl.chainbase
415 return all(rev == chainbase(rev) for rev in cl)
414 return all(rev == chainbase(rev) for rev in cl)
416
415
417 @staticmethod
416 @staticmethod
418 def fromconfig(repo):
417 def fromconfig(repo):
419 return True
418 return True
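# A toy sketch (illustrative): a revision stored as a full snapshot is its
# own delta-chain base, so "the changelog uses no delta chains" is exactly
# "every rev satisfies rev == chainbase(rev)", the check performed above.

toy_chainbase = {0: 0, 1: 1, 2: 1}.__getitem__  # rev 2 deltas against rev 1
assert not all(rev == toy_chainbase(rev) for rev in range(3))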
420
419
421
420
422 _has_zstd = (
421 _has_zstd = (
423 b'zstd' in util.compengines
422 b'zstd' in util.compengines
424 and util.compengines[b'zstd'].available()
423 and util.compengines[b'zstd'].available()
425 and util.compengines[b'zstd'].revlogheader()
424 and util.compengines[b'zstd'].revlogheader()
426 )
425 )
427
426
428
427
429 @registerformatvariant
428 @registerformatvariant
430 class compressionengine(formatvariant):
429 class compressionengine(formatvariant):
431 name = b'compression'
430 name = b'compression'
432
431
433 if _has_zstd:
432 if _has_zstd:
434 default = b'zstd'
433 default = b'zstd'
435 else:
434 else:
436 default = b'zlib'
435 default = b'zlib'
437
436
438 description = _(
437 description = _(
439 b'Compression algorithm used to compress data. '
438 b'Compression algorithm used to compress data. '
440 b'Some engines are faster than others'
439 b'Some engines are faster than others'
441 )
440 )
442
441
443 upgrademessage = _(
442 upgrademessage = _(
444 b'revlog content will be recompressed with the new algorithm.'
443 b'revlog content will be recompressed with the new algorithm.'
445 )
444 )
446
445
447 @classmethod
446 @classmethod
448 def fromrepo(cls, repo):
447 def fromrepo(cls, repo):
449 # we allow multiple compression engine requirements to co-exist because,
448 # we allow multiple compression engine requirements to co-exist because,
450 # strictly speaking, revlog seems to support mixed compression styles.
449 # strictly speaking, revlog seems to support mixed compression styles.
451 #
450 #
452 # The compression used for new entries will be "the last one"
451 # The compression used for new entries will be "the last one"
453 compression = b'zlib'
452 compression = b'zlib'
454 for req in repo.requirements:
453 for req in repo.requirements:
455 prefix = req.startswith
454 prefix = req.startswith
456 if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
455 if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
457 compression = req.split(b'-', 2)[2]
456 compression = req.split(b'-', 2)[2]
458 return compression
457 return compression
459
458
460 @classmethod
459 @classmethod
461 def fromconfig(cls, repo):
460 def fromconfig(cls, repo):
462 compengines = repo.ui.configlist(b'format', b'revlog-compression')
461 compengines = repo.ui.configlist(b'format', b'revlog-compression')
463 # return the first valid value as the selection code would do
462 # return the first valid value as the selection code would do
464 for comp in compengines:
463 for comp in compengines:
465 if comp in util.compengines:
464 if comp in util.compengines:
466 e = util.compengines[comp]
465 e = util.compengines[comp]
467 if e.available() and e.revlogheader():
466 if e.available() and e.revlogheader():
468 return comp
467 return comp
469
468
470 # no valid compression found, let's display them all for clarity
469 # no valid compression found, let's display them all for clarity
471 return b','.join(compengines)
470 return b','.join(compengines)
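# A minimal sketch (illustrative): compression requirements are spelled
# 'revlog-compression-<engine>' (or the older 'exp-compression-<engine>'),
# so splitting on b'-' at most twice isolates the engine name, even when the
# engine name itself contains dashes.

assert b'revlog-compression-zstd'.split(b'-', 2)[2] == b'zstd'
assert b'exp-compression-zstd-exp'.split(b'-', 2)[2] == b'zstd-exp'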
472
471
473
472
474 @registerformatvariant
473 @registerformatvariant
475 class compressionlevel(formatvariant):
474 class compressionlevel(formatvariant):
476 name = b'compression-level'
475 name = b'compression-level'
477 default = b'default'
476 default = b'default'
478
477
479 description = _(b'compression level')
478 description = _(b'compression level')
480
479
481 upgrademessage = _(b'revlog content will be recompressed')
480 upgrademessage = _(b'revlog content will be recompressed')
482
481
483 @classmethod
482 @classmethod
484 def fromrepo(cls, repo):
483 def fromrepo(cls, repo):
485 comp = compressionengine.fromrepo(repo)
484 comp = compressionengine.fromrepo(repo)
486 level = None
485 level = None
487 if comp == b'zlib':
486 if comp == b'zlib':
488 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
487 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
489 elif comp == b'zstd':
488 elif comp == b'zstd':
490 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
489 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
491 if level is None:
490 if level is None:
492 return b'default'
491 return b'default'
493 return bytes(level)
492 return bytes(level)
494
493
495 @classmethod
494 @classmethod
496 def fromconfig(cls, repo):
495 def fromconfig(cls, repo):
497 comp = compressionengine.fromconfig(repo)
496 comp = compressionengine.fromconfig(repo)
498 level = None
497 level = None
499 if comp == b'zlib':
498 if comp == b'zlib':
500 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
499 level = repo.ui.configint(b'storage', b'revlog.zlib.level')
501 elif comp == b'zstd':
500 elif comp == b'zstd':
502 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
501 level = repo.ui.configint(b'storage', b'revlog.zstd.level')
503 if level is None:
502 if level is None:
504 return b'default'
503 return b'default'
505 return bytes(level)
504 return bytes(level)
506
505
507
506
508 def find_format_upgrades(repo):
507 def find_format_upgrades(repo):
509 """returns a list of format upgrades which can be perform on the repo"""
508 """returns a list of format upgrades which can be perform on the repo"""
510 upgrades = []
509 upgrades = []
511
510
512 # We could detect lack of revlogv1 and store here, but they were added
511 # We could detect lack of revlogv1 and store here, but they were added
513 # in 0.9.2 and we don't support upgrading repos without these
512 # in 0.9.2 and we don't support upgrading repos without these
514 # requirements, so let's not bother.
513 # requirements, so let's not bother.
515
514
516 for fv in allformatvariant:
515 for fv in allformatvariant:
517 if not fv.fromrepo(repo):
516 if not fv.fromrepo(repo):
518 upgrades.append(fv)
517 upgrades.append(fv)
519
518
520 return upgrades
519 return upgrades
521
520
522
521
523 def find_format_downgrades(repo):
522 def find_format_downgrades(repo):
524 """returns a list of format downgrades which will be performed on the repo
523 """returns a list of format downgrades which will be performed on the repo
525 because their config option is disabled"""
524 because their config option is disabled"""
526
525
527 downgrades = []
526 downgrades = []
528
527
529 for fv in allformatvariant:
528 for fv in allformatvariant:
530 if fv.name == b'compression':
529 if fv.name == b'compression':
531 # If there is a compression change between repository
530 # If there is a compression change between repository
532 # and config, destination repository compression will change
531 # and config, destination repository compression will change
533 # and current compression will be removed.
532 # and current compression will be removed.
534 if fv.fromrepo(repo) != fv.fromconfig(repo):
533 if fv.fromrepo(repo) != fv.fromconfig(repo):
535 downgrades.append(fv)
534 downgrades.append(fv)
536 continue
535 continue
537 # format variant exists in repo but does not exist in the new repository
536 # format variant exists in repo but does not exist in the new repository
538 # config
537 # config
539 if fv.fromrepo(repo) and not fv.fromconfig(repo):
538 if fv.fromrepo(repo) and not fv.fromconfig(repo):
540 downgrades.append(fv)
539 downgrades.append(fv)
541
540
542 return downgrades
541 return downgrades
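# A minimal sketch (illustrative), mirroring the two selections above for a
# boolean variant: an upgrade candidate is any variant the repository lacks,
# and a downgrade is any variant the repository has but the config dropped.

def classify(fromrepo, fromconfig):
    if not fromrepo:
        return 'upgrade candidate'
    if not fromconfig:
        return 'downgrade'
    return None

assert classify(False, True) == 'upgrade candidate'
assert classify(True, False) == 'downgrade'
assert classify(True, True) is None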
543
542
544
543
545 ALL_OPTIMISATIONS = []
544 ALL_OPTIMISATIONS = []
546
545
547
546
548 def register_optimization(obj):
547 def register_optimization(obj):
549 ALL_OPTIMISATIONS.append(obj)
548 ALL_OPTIMISATIONS.append(obj)
550 return obj
549 return obj
551
550
552
551
553 class optimization(improvement):
552 class optimization(improvement):
554 """an improvement subclass dedicated to optimizations"""
553 """an improvement subclass dedicated to optimizations"""
555
554
556 type = OPTIMISATION
555 type = OPTIMISATION
557
556
558
557
559 @register_optimization
558 @register_optimization
560 class redeltaparents(optimization):
559 class redeltaparents(optimization):
561 name = b're-delta-parent'
560 name = b're-delta-parent'
562
561
563 type = OPTIMISATION
562 type = OPTIMISATION
564
563
565 description = _(
564 description = _(
566 b'deltas within internal storage will be recalculated to '
565 b'deltas within internal storage will be recalculated to '
567 b'choose an optimal base revision where this was not '
566 b'choose an optimal base revision where this was not '
568 b'already done; the size of the repository may shrink and '
567 b'already done; the size of the repository may shrink and '
569 b'various operations may become faster; the first time '
568 b'various operations may become faster; the first time '
570 b'this optimization is performed could slow down upgrade '
569 b'this optimization is performed could slow down upgrade '
571 b'execution considerably; subsequent invocations should '
570 b'execution considerably; subsequent invocations should '
572 b'not run noticeably slower'
571 b'not run noticeably slower'
573 )
572 )
574
573
575 upgrademessage = _(
574 upgrademessage = _(
576 b'deltas within internal storage will choose a new '
575 b'deltas within internal storage will choose a new '
577 b'base revision if needed'
576 b'base revision if needed'
578 )
577 )
579
578
580
579
581 @register_optimization
580 @register_optimization
582 class redeltamultibase(optimization):
581 class redeltamultibase(optimization):
583 name = b're-delta-multibase'
582 name = b're-delta-multibase'
584
583
585 type = OPTIMISATION
584 type = OPTIMISATION
586
585
587 description = _(
586 description = _(
588 b'deltas within internal storage will be recalculated '
587 b'deltas within internal storage will be recalculated '
589 b'against multiple base revisions and the smallest '
588 b'against multiple base revisions and the smallest '
590 b'difference will be used; the size of the repository may '
589 b'difference will be used; the size of the repository may '
591 b'shrink significantly when there are many merges; this '
590 b'shrink significantly when there are many merges; this '
592 b'optimization will slow down execution in proportion to '
591 b'optimization will slow down execution in proportion to '
593 b'the number of merges in the repository and the number '
592 b'the number of merges in the repository and the number '
594 b'of files in the repository; this slowdown should not '
593 b'of files in the repository; this slowdown should not '
595 b'be significant unless there are tens of thousands of '
594 b'be significant unless there are tens of thousands of '
596 b'files and thousands of merges'
595 b'files and thousands of merges'
597 )
596 )
598
597
599 upgrademessage = _(
598 upgrademessage = _(
600 b'deltas within internal storage will choose an '
599 b'deltas within internal storage will choose an '
601 b'optimal delta by computing deltas against multiple '
600 b'optimal delta by computing deltas against multiple '
602 b'parents; may slow down execution time '
601 b'parents; may slow down execution time '
603 b'significantly'
602 b'significantly'
604 )
603 )
605
604
606
605
607 @register_optimization
606 @register_optimization
608 class redeltaall(optimization):
607 class redeltaall(optimization):
609 name = b're-delta-all'
608 name = b're-delta-all'
610
609
611 type = OPTIMISATION
610 type = OPTIMISATION
612
611
613 description = _(
612 description = _(
614 b'deltas within internal storage will always be '
613 b'deltas within internal storage will always be '
615 b'recalculated without reusing prior deltas; this will '
614 b'recalculated without reusing prior deltas; this will '
616 b'likely make execution run several times slower; this '
615 b'likely make execution run several times slower; this '
617 b'optimization is typically not needed'
616 b'optimization is typically not needed'
618 )
617 )
619
618
620 upgrademessage = _(
619 upgrademessage = _(
621 b'deltas within internal storage will be fully '
620 b'deltas within internal storage will be fully '
622 b'recomputed; this will likely drastically slow down '
621 b'recomputed; this will likely drastically slow down '
623 b'execution time'
622 b'execution time'
624 )
623 )
625
624
626
625
627 @register_optimization
626 @register_optimization
628 class redeltafulladd(optimization):
627 class redeltafulladd(optimization):
629 name = b're-delta-fulladd'
628 name = b're-delta-fulladd'
630
629
631 type = OPTIMISATION
630 type = OPTIMISATION
632
631
633 description = _(
632 description = _(
634 b'every revision will be re-added as if it was new '
633 b'every revision will be re-added as if it was new '
635 b'content. It will go through the full storage '
634 b'content. It will go through the full storage '
636 b'mechanism giving extensions a chance to process it '
635 b'mechanism giving extensions a chance to process it '
637 b'(eg. lfs). This is similar to "re-delta-all" but even '
636 b'(eg. lfs). This is similar to "re-delta-all" but even '
638 b'slower since more logic is involved.'
637 b'slower since more logic is involved.'
639 )
638 )
640
639
641 upgrademessage = _(
640 upgrademessage = _(
642 b'each revision will be added as new content to the '
641 b'each revision will be added as new content to the '
643 b'internal storage; this will likely drastically slow '
642 b'internal storage; this will likely drastically slow '
644 b'down execution time, but some extensions might need '
643 b'down execution time, but some extensions might need '
645 b'it'
644 b'it'
646 )
645 )
647
646
648
647
649 def findoptimizations(repo):
648 def findoptimizations(repo):
650 """Determine optimisation that could be used during upgrade"""
649 """Determine optimisation that could be used during upgrade"""
651 # These are unconditionally added. There is logic later that figures out
650 # These are unconditionally added. There is logic later that figures out
652 # which ones to apply.
651 # which ones to apply.
653 return list(ALL_OPTIMISATIONS)
652 return list(ALL_OPTIMISATIONS)
654
653
655
654
656 def determine_upgrade_actions(
655 def determine_upgrade_actions(
657 repo, format_upgrades, optimizations, sourcereqs, destreqs
656 repo, format_upgrades, optimizations, sourcereqs, destreqs
658 ):
657 ):
659 """Determine upgrade actions that will be performed.
658 """Determine upgrade actions that will be performed.
660
659
661 Given a list of improvements as returned by ``find_format_upgrades`` and
660 Given a list of improvements as returned by ``find_format_upgrades`` and
662 ``findoptimizations``, determine the list of upgrade actions that
661 ``findoptimizations``, determine the list of upgrade actions that
663 will be performed.
662 will be performed.
664
663
665 The role of this function is to filter improvements if needed, apply
664 The role of this function is to filter improvements if needed, apply
666 recommended optimizations from the improvements list that make sense,
665 recommended optimizations from the improvements list that make sense,
667 etc.
666 etc.
668
667
669 Returns a list of action names.
668 Returns a list of action names.
670 """
669 """
671 newactions = []
670 newactions = []
672
671
673 for d in format_upgrades:
672 for d in format_upgrades:
674 if util.safehasattr(d, '_requirement'):
673 if util.safehasattr(d, '_requirement'):
675 name = d._requirement
674 name = d._requirement
676 else:
675 else:
677 name = None
676 name = None
678
677
679 # If the action is a requirement that doesn't show up in the
678 # If the action is a requirement that doesn't show up in the
680 # destination requirements, prune the action.
679 # destination requirements, prune the action.
681 if name is not None and name not in destreqs:
680 if name is not None and name not in destreqs:
682 continue
681 continue
683
682
684 newactions.append(d)
683 newactions.append(d)
685
684
686 newactions.extend(
685 newactions.extend(
687 o
686 o
688 for o in sorted(optimizations, key=(lambda x: x.name))
687 for o in sorted(optimizations, key=(lambda x: x.name))
689 if o not in newactions
688 if o not in newactions
690 )
689 )
691
690
692 # FUTURE consider adding some optimizations here for certain transitions.
691 # FUTURE consider adding some optimizations here for certain transitions.
693 # e.g. adding generaldelta could schedule parent redeltas.
692 # e.g. adding generaldelta could schedule parent redeltas.
694
693
695 return newactions
694 return newactions
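# A minimal sketch (illustrative, hypothetical values): the pruning step
# above keeps a requirement-backed upgrade only when its requirement
# survives into the destination set; optimizations are then appended,
# sorted by name, if not already present.

destreqs = {b'store', b'generaldelta'}
candidate_reqs = [b'generaldelta', b'sparserevlog']  # hypothetical _requirement values
assert [r for r in candidate_reqs if r in destreqs] == [b'generaldelta']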
696
695
697
696
698 class BaseOperation:
697 class BaseOperation:
699 """base class that contains the minimum for an upgrade to work
698 """base class that contains the minimum for an upgrade to work
700
699
701 (this might need to be extended as the usage of subclass alternatives to
700 (this might need to be extended as the usage of subclass alternatives to
702 UpgradeOperation grows)
701 UpgradeOperation grows)
703 """
702 """
704
703
705 def __init__(
704 def __init__(
706 self,
705 self,
707 new_requirements,
706 new_requirements,
708 backup_store,
707 backup_store,
709 ):
708 ):
710 self.new_requirements = new_requirements
709 self.new_requirements = new_requirements
711 # should this operation create a backup of the store
710 # should this operation create a backup of the store
712 self.backup_store = backup_store
711 self.backup_store = backup_store
713
712
714
713
715 class UpgradeOperation(BaseOperation):
714 class UpgradeOperation(BaseOperation):
716 """represent the work to be done during an upgrade"""
715 """represent the work to be done during an upgrade"""
717
716
718 def __init__(
717 def __init__(
719 self,
718 self,
720 ui,
719 ui,
721 new_requirements,
720 new_requirements,
722 current_requirements,
721 current_requirements,
723 upgrade_actions,
722 upgrade_actions,
724 removed_actions,
723 removed_actions,
725 revlogs_to_process,
724 revlogs_to_process,
726 backup_store,
725 backup_store,
727 ):
726 ):
728 super().__init__(
727 super().__init__(
729 new_requirements,
728 new_requirements,
730 backup_store,
729 backup_store,
731 )
730 )
732 self.ui = ui
731 self.ui = ui
733 self.current_requirements = current_requirements
732 self.current_requirements = current_requirements
734 # list of upgrade actions the operation will perform
733 # list of upgrade actions the operation will perform
735 self.upgrade_actions = upgrade_actions
734 self.upgrade_actions = upgrade_actions
736 self.removed_actions = removed_actions
735 self.removed_actions = removed_actions
737 self.revlogs_to_process = revlogs_to_process
736 self.revlogs_to_process = revlogs_to_process
738 # requirements which will be added by the operation
737 # requirements which will be added by the operation
739 self._added_requirements = (
738 self._added_requirements = (
740 self.new_requirements - self.current_requirements
739 self.new_requirements - self.current_requirements
741 )
740 )
742 # requirements which will be removed by the operation
741 # requirements which will be removed by the operation
743 self._removed_requirements = (
742 self._removed_requirements = (
744 self.current_requirements - self.new_requirements
743 self.current_requirements - self.new_requirements
745 )
744 )
746 # requirements which will be preserved by the operation
745 # requirements which will be preserved by the operation
747 self._preserved_requirements = (
746 self._preserved_requirements = (
748 self.current_requirements & self.new_requirements
747 self.current_requirements & self.new_requirements
749 )
748 )
750 # optimizations which are not used; it is recommended that they
749 # optimizations which are not used; it is recommended that they
751 # be enabled
750 # be enabled
752 all_optimizations = findoptimizations(None)
751 all_optimizations = findoptimizations(None)
753 self.unused_optimizations = [
752 self.unused_optimizations = [
754 i for i in all_optimizations if i not in self.upgrade_actions
753 i for i in all_optimizations if i not in self.upgrade_actions
755 ]
754 ]
756
755
757 # delta reuse mode of this upgrade operation
756 # delta reuse mode of this upgrade operation
758 upgrade_actions_names = self.upgrade_actions_names
757 upgrade_actions_names = self.upgrade_actions_names
759 self.delta_reuse_mode = revlog.revlog.DELTAREUSEALWAYS
758 self.delta_reuse_mode = revlog.revlog.DELTAREUSEALWAYS
760 if b're-delta-all' in upgrade_actions_names:
759 if b're-delta-all' in upgrade_actions_names:
761 self.delta_reuse_mode = revlog.revlog.DELTAREUSENEVER
760 self.delta_reuse_mode = revlog.revlog.DELTAREUSENEVER
762 elif b're-delta-parent' in upgrade_actions_names:
761 elif b're-delta-parent' in upgrade_actions_names:
763 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
762 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
764 elif b're-delta-multibase' in upgrade_actions_names:
763 elif b're-delta-multibase' in upgrade_actions_names:
765 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
764 self.delta_reuse_mode = revlog.revlog.DELTAREUSESAMEREVS
766 elif b're-delta-fulladd' in upgrade_actions_names:
765 elif b're-delta-fulladd' in upgrade_actions_names:
767 self.delta_reuse_mode = revlog.revlog.DELTAREUSEFULLADD
766 self.delta_reuse_mode = revlog.revlog.DELTAREUSEFULLADD
768
767
769 # should this operation force re-delta of both parents
768 # should this operation force re-delta of both parents
770 self.force_re_delta_both_parents = (
769 self.force_re_delta_both_parents = (
771 b're-delta-multibase' in upgrade_actions_names
770 b're-delta-multibase' in upgrade_actions_names
772 )
771 )
773
772
774 @property
773 @property
775 def upgrade_actions_names(self):
774 def upgrade_actions_names(self):
776 return set([a.name for a in self.upgrade_actions])
775 return set([a.name for a in self.upgrade_actions])
777
776
778 @property
777 @property
779 def requirements_only(self):
778 def requirements_only(self):
780 # does the operation only touch repository requirements?
779 # does the operation only touch repository requirements?
781 return (
780 return (
782 self.touches_requirements
781 self.touches_requirements
783 and not self.touches_filelogs
782 and not self.touches_filelogs
784 and not self.touches_manifests
783 and not self.touches_manifests
785 and not self.touches_changelog
784 and not self.touches_changelog
786 and not self.touches_dirstate
785 and not self.touches_dirstate
787 )
786 )
788
787
789 @property
788 @property
790 def touches_filelogs(self):
789 def touches_filelogs(self):
791 for a in self.upgrade_actions:
790 for a in self.upgrade_actions:
792 # in optimisations, we re-process the revlogs again
791 # in optimisations, we re-process the revlogs again
793 if a.type == OPTIMISATION:
792 if a.type == OPTIMISATION:
794 return True
793 return True
795 elif a.touches_filelogs:
794 elif a.touches_filelogs:
796 return True
795 return True
797 for a in self.removed_actions:
796 for a in self.removed_actions:
798 if a.touches_filelogs:
797 if a.touches_filelogs:
799 return True
798 return True
800 return False
799 return False
801
800
802 @property
801 @property
803 def touches_manifests(self):
802 def touches_manifests(self):
804 for a in self.upgrade_actions:
803 for a in self.upgrade_actions:
805 # in optimisations, we re-process the revlogs again
804 # in optimisations, we re-process the revlogs again
806 if a.type == OPTIMISATION:
805 if a.type == OPTIMISATION:
807 return True
806 return True
808 elif a.touches_manifests:
807 elif a.touches_manifests:
809 return True
808 return True
810 for a in self.removed_actions:
809 for a in self.removed_actions:
811 if a.touches_manifests:
810 if a.touches_manifests:
812 return True
811 return True
813 return False
812 return False
814
813
815 @property
814 @property
816 def touches_changelog(self):
815 def touches_changelog(self):
817 for a in self.upgrade_actions:
816 for a in self.upgrade_actions:
818 # in optimisations, we re-process the revlogs again
817 # in optimisations, we re-process the revlogs again
819 if a.type == OPTIMISATION:
818 if a.type == OPTIMISATION:
820 return True
819 return True
821 elif a.touches_changelog:
820 elif a.touches_changelog:
822 return True
821 return True
823 for a in self.removed_actions:
822 for a in self.removed_actions:
824 if a.touches_changelog:
823 if a.touches_changelog:
825 return True
824 return True
826 return False
825 return False
827
826
828 @property
827 @property
829 def touches_requirements(self):
828 def touches_requirements(self):
830 for a in self.upgrade_actions:
829 for a in self.upgrade_actions:
831 # optimisations are used to re-process revlogs and do not result
830 # optimisations are used to re-process revlogs and do not result
832 # in a requirement being added or removed
831 # in a requirement being added or removed
833 if a.type == OPTIMISATION:
832 if a.type == OPTIMISATION:
834 pass
833 pass
835 elif a.touches_requirements:
834 elif a.touches_requirements:
836 return True
835 return True
837 for a in self.removed_actions:
836 for a in self.removed_actions:
838 if a.touches_requirements:
837 if a.touches_requirements:
839 return True
838 return True
840
839
841 @property
840 @property
842 def touches_dirstate(self):
841 def touches_dirstate(self):
843 for a in self.upgrade_actions:
842 for a in self.upgrade_actions:
844 # revlog optimisations do not affect the dirstate
843 # revlog optimisations do not affect the dirstate
845 if a.type == OPTIMISATION:
844 if a.type == OPTIMISATION:
846 pass
845 pass
847 elif a.touches_dirstate:
846 elif a.touches_dirstate:
848 return True
847 return True
849 for a in self.removed_actions:
848 for a in self.removed_actions:
850 if a.touches_dirstate:
849 if a.touches_dirstate:
851 return True
850 return True
852
851
853 return False
852 return False
854
853
855 def _write_labeled(self, l, label: bytes):
854 def _write_labeled(self, l, label: bytes):
856 """
855 """
857 Utility function to aid writing of a list under one label
856 Utility function to aid writing of a list under one label
858 """
857 """
859 first = True
858 first = True
860 for r in sorted(l):
859 for r in sorted(l):
861 if not first:
860 if not first:
862 self.ui.write(b', ')
861 self.ui.write(b', ')
863 self.ui.write(r, label=label)
862 self.ui.write(r, label=label)
864 first = False
863 first = False
865
864
866 def print_requirements(self):
865 def print_requirements(self):
867 self.ui.write(_(b'requirements\n'))
866 self.ui.write(_(b'requirements\n'))
868 self.ui.write(_(b' preserved: '))
867 self.ui.write(_(b' preserved: '))
869 self._write_labeled(
868 self._write_labeled(
870 self._preserved_requirements, b"upgrade-repo.requirement.preserved"
869 self._preserved_requirements, b"upgrade-repo.requirement.preserved"
871 )
870 )
872 self.ui.write((b'\n'))
871 self.ui.write((b'\n'))
873 if self._removed_requirements:
872 if self._removed_requirements:
874 self.ui.write(_(b' removed: '))
873 self.ui.write(_(b' removed: '))
875 self._write_labeled(
874 self._write_labeled(
876 self._removed_requirements, b"upgrade-repo.requirement.removed"
875 self._removed_requirements, b"upgrade-repo.requirement.removed"
877 )
876 )
878 self.ui.write((b'\n'))
877 self.ui.write((b'\n'))
879 if self._added_requirements:
878 if self._added_requirements:
880 self.ui.write(_(b' added: '))
879 self.ui.write(_(b' added: '))
881 self._write_labeled(
880 self._write_labeled(
882 self._added_requirements, b"upgrade-repo.requirement.added"
881 self._added_requirements, b"upgrade-repo.requirement.added"
883 )
882 )
884 self.ui.write((b'\n'))
883 self.ui.write((b'\n'))
885 self.ui.write(b'\n')
884 self.ui.write(b'\n')
886
885
887 def print_optimisations(self):
886 def print_optimisations(self):
888 optimisations = [
887 optimisations = [
889 a for a in self.upgrade_actions if a.type == OPTIMISATION
888 a for a in self.upgrade_actions if a.type == OPTIMISATION
890 ]
889 ]
891 optimisations.sort(key=lambda a: a.name)
890 optimisations.sort(key=lambda a: a.name)
892 if optimisations:
891 if optimisations:
893 self.ui.write(_(b'optimisations: '))
892 self.ui.write(_(b'optimisations: '))
894 self._write_labeled(
893 self._write_labeled(
895 [a.name for a in optimisations],
894 [a.name for a in optimisations],
896 b"upgrade-repo.optimisation.performed",
895 b"upgrade-repo.optimisation.performed",
897 )
896 )
898 self.ui.write(b'\n\n')
897 self.ui.write(b'\n\n')
899
898
900 def print_upgrade_actions(self):
899 def print_upgrade_actions(self):
901 for a in self.upgrade_actions:
900 for a in self.upgrade_actions:
902 self.ui.status(b'%s\n %s\n\n' % (a.name, a.upgrademessage))
901 self.ui.status(b'%s\n %s\n\n' % (a.name, a.upgrademessage))
903
902
904 def print_affected_revlogs(self):
903 def print_affected_revlogs(self):
905 if not self.revlogs_to_process:
904 if not self.revlogs_to_process:
906 self.ui.write((b'no revlogs to process\n'))
905 self.ui.write((b'no revlogs to process\n'))
907 else:
906 else:
908 self.ui.write((b'processed revlogs:\n'))
907 self.ui.write((b'processed revlogs:\n'))
909 for r in sorted(self.revlogs_to_process):
908 for r in sorted(self.revlogs_to_process):
910 self.ui.write((b' - %s\n' % r))
909 self.ui.write((b' - %s\n' % r))
911 self.ui.write((b'\n'))
910 self.ui.write((b'\n'))
912
911
913 def print_unused_optimizations(self):
912 def print_unused_optimizations(self):
914 for i in self.unused_optimizations:
913 for i in self.unused_optimizations:
915 self.ui.status(_(b'%s\n %s\n\n') % (i.name, i.description))
914 self.ui.status(_(b'%s\n %s\n\n') % (i.name, i.description))
916
915
917 def has_upgrade_action(self, name):
916 def has_upgrade_action(self, name):
918 """Check whether the upgrade operation will perform this action"""
917 """Check whether the upgrade operation will perform this action"""
919 return name in self._upgrade_actions_names
918 return name in self._upgrade_actions_names
920
919
921 def print_post_op_messages(self):
920 def print_post_op_messages(self):
922 """print post upgrade operation warning messages"""
921 """print post upgrade operation warning messages"""
923 for a in self.upgrade_actions:
922 for a in self.upgrade_actions:
924 if a.postupgrademessage is not None:
923 if a.postupgrademessage is not None:
925 self.ui.warn(b'%s\n' % a.postupgrademessage)
924 self.ui.warn(b'%s\n' % a.postupgrademessage)
926 for a in self.removed_actions:
925 for a in self.removed_actions:
927 if a.postdowngrademessage is not None:
926 if a.postdowngrademessage is not None:
928 self.ui.warn(b'%s\n' % a.postdowngrademessage)
927 self.ui.warn(b'%s\n' % a.postdowngrademessage)
929
928
930
929
931 ### Code checking if a repository can got through the upgrade process at all. #
930 ### Code checking if a repository can got through the upgrade process at all. #
932
931
933
932
934 def requiredsourcerequirements(repo):
933 def requiredsourcerequirements(repo):
935 """Obtain requirements required to be present to upgrade a repo.
934 """Obtain requirements required to be present to upgrade a repo.
936
935
937 An upgrade will not be allowed if the repository doesn't have the
936 An upgrade will not be allowed if the repository doesn't have the
938 requirements returned by this function.
937 requirements returned by this function.
939 """
938 """
940 return {
939 return {
941 # Introduced in Mercurial 0.9.2.
940 # Introduced in Mercurial 0.9.2.
942 requirements.STORE_REQUIREMENT,
941 requirements.STORE_REQUIREMENT,
943 }
942 }
944
943
945
944
946 def blocksourcerequirements(repo):
945 def blocksourcerequirements(repo):
947 """Obtain requirements that will prevent an upgrade from occurring.
946 """Obtain requirements that will prevent an upgrade from occurring.
948
947
949 An upgrade cannot be performed if the source repository contains a
948 An upgrade cannot be performed if the source repository contains a
950 requirements in the returned set.
949 requirements in the returned set.
951 """
950 """
952 return {
951 return {
953 # This was a precursor to generaldelta and was never enabled by default.
952 # This was a precursor to generaldelta and was never enabled by default.
954 # It should (hopefully) not exist in the wild.
953 # It should (hopefully) not exist in the wild.
955 b'parentdelta',
954 b'parentdelta',
956 }
955 }
957
956
958
957
959 def check_revlog_version(reqs):
958 def check_revlog_version(reqs):
960 """Check that the requirements contain at least one Revlog version"""
959 """Check that the requirements contain at least one Revlog version"""
961 all_revlogs = {
960 all_revlogs = {
962 requirements.REVLOGV1_REQUIREMENT,
961 requirements.REVLOGV1_REQUIREMENT,
963 requirements.REVLOGV2_REQUIREMENT,
962 requirements.REVLOGV2_REQUIREMENT,
964 }
963 }
965 if not all_revlogs.intersection(reqs):
964 if not all_revlogs.intersection(reqs):
966 msg = _(b'cannot upgrade repository; missing a revlog version')
965 msg = _(b'cannot upgrade repository; missing a revlog version')
967 raise error.Abort(msg)
966 raise error.Abort(msg)
968
967
969
968
970 def check_source_requirements(repo):
969 def check_source_requirements(repo):
971 """Ensure that no existing requirements prevent the repository upgrade"""
970 """Ensure that no existing requirements prevent the repository upgrade"""
972
971
973 check_revlog_version(repo.requirements)
972 check_revlog_version(repo.requirements)
974 required = requiredsourcerequirements(repo)
973 required = requiredsourcerequirements(repo)
975 missingreqs = required - repo.requirements
974 missingreqs = required - repo.requirements
976 if missingreqs:
975 if missingreqs:
977 msg = _(b'cannot upgrade repository; requirement missing: %s')
976 msg = _(b'cannot upgrade repository; requirement missing: %s')
978 missingreqs = b', '.join(sorted(missingreqs))
977 missingreqs = b', '.join(sorted(missingreqs))
979 raise error.Abort(msg % missingreqs)
978 raise error.Abort(msg % missingreqs)
980
979
981 blocking = blocksourcerequirements(repo)
980 blocking = blocksourcerequirements(repo)
982 blockingreqs = blocking & repo.requirements
981 blockingreqs = blocking & repo.requirements
983 if blockingreqs:
982 if blockingreqs:
984 m = _(b'cannot upgrade repository; unsupported source requirement: %s')
983 m = _(b'cannot upgrade repository; unsupported source requirement: %s')
985 blockingreqs = b', '.join(sorted(blockingreqs))
984 blockingreqs = b', '.join(sorted(blockingreqs))
986 raise error.Abort(m % blockingreqs)
985 raise error.Abort(m % blockingreqs)
987 # Upgrade should operate on the actual store, not the shared link.
986 # Upgrade should operate on the actual store, not the shared link.
988
987
989 bad_share = (
988 bad_share = (
990 requirements.SHARED_REQUIREMENT in repo.requirements
989 requirements.SHARED_REQUIREMENT in repo.requirements
991 and requirements.SHARESAFE_REQUIREMENT not in repo.requirements
990 and requirements.SHARESAFE_REQUIREMENT not in repo.requirements
992 )
991 )
993 if bad_share:
992 if bad_share:
994 m = _(b'cannot upgrade repository; share repository without share-safe')
993 m = _(b'cannot upgrade repository; share repository without share-safe')
995 h = _(b'check :hg:`help config.format.use-share-safe`')
994 h = _(b'check :hg:`help config.format.use-share-safe`')
996 raise error.Abort(m, hint=h)
995 raise error.Abort(m, hint=h)
997
996
998
997
999 ### Verify the validity of the planned requirement changes ####################
998 ### Verify the validity of the planned requirement changes ####################
1000
999
1001
1000
1002 def supportremovedrequirements(repo):
1001 def supportremovedrequirements(repo):
1003 """Obtain requirements that can be removed during an upgrade.
1002 """Obtain requirements that can be removed during an upgrade.
1004
1003
1005 If an upgrade were to create a repository that dropped a requirement,
1004 If an upgrade were to create a repository that dropped a requirement,
1006 the dropped requirement must appear in the returned set for the upgrade
1005 the dropped requirement must appear in the returned set for the upgrade
1007 to be allowed.
1006 to be allowed.
1008 """
1007 """
1009 supported = {
1008 supported = {
1010 requirements.SPARSEREVLOG_REQUIREMENT,
1009 requirements.SPARSEREVLOG_REQUIREMENT,
1011 requirements.COPIESSDC_REQUIREMENT,
1010 requirements.COPIESSDC_REQUIREMENT,
1012 requirements.NODEMAP_REQUIREMENT,
1011 requirements.NODEMAP_REQUIREMENT,
1013 requirements.SHARESAFE_REQUIREMENT,
1012 requirements.SHARESAFE_REQUIREMENT,
1014 requirements.REVLOGV2_REQUIREMENT,
1013 requirements.REVLOGV2_REQUIREMENT,
1015 requirements.CHANGELOGV2_REQUIREMENT,
1014 requirements.CHANGELOGV2_REQUIREMENT,
1016 requirements.REVLOGV1_REQUIREMENT,
1015 requirements.REVLOGV1_REQUIREMENT,
1017 requirements.DIRSTATE_TRACKED_HINT_V1,
1016 requirements.DIRSTATE_TRACKED_HINT_V1,
1018 requirements.DIRSTATE_V2_REQUIREMENT,
1017 requirements.DIRSTATE_V2_REQUIREMENT,
1019 }
1018 }
1020 for name in compression.compengines:
1019 for name in compression.compengines:
1021 engine = compression.compengines[name]
1020 engine = compression.compengines[name]
1022 if engine.available() and engine.revlogheader():
1021 if engine.available() and engine.revlogheader():
1023 supported.add(b'exp-compression-%s' % name)
1022 supported.add(b'exp-compression-%s' % name)
1024 if engine.name() == b'zstd':
1023 if engine.name() == b'zstd':
1025 supported.add(b'revlog-compression-zstd')
1024 supported.add(b'revlog-compression-zstd')
1026 return supported
1025 return supported
1027
1026
1028
1027
1029 def supporteddestrequirements(repo):
1028 def supporteddestrequirements(repo):
1030 """Obtain requirements that upgrade supports in the destination.
1029 """Obtain requirements that upgrade supports in the destination.
1031
1030
1032 If the result of the upgrade would have requirements not in this set,
1031 If the result of the upgrade would have requirements not in this set,
1033 the upgrade is disallowed.
1032 the upgrade is disallowed.
1034
1033
1035 Extensions should monkeypatch this to add their custom requirements.
1034 Extensions should monkeypatch this to add their custom requirements.
1036 """
1035 """
1037 supported = {
1036 supported = {
1038 requirements.CHANGELOGV2_REQUIREMENT,
1037 requirements.CHANGELOGV2_REQUIREMENT,
1039 requirements.COPIESSDC_REQUIREMENT,
1038 requirements.COPIESSDC_REQUIREMENT,
1040 requirements.DIRSTATE_TRACKED_HINT_V1,
1039 requirements.DIRSTATE_TRACKED_HINT_V1,
1041 requirements.DIRSTATE_V2_REQUIREMENT,
1040 requirements.DIRSTATE_V2_REQUIREMENT,
1042 requirements.DOTENCODE_REQUIREMENT,
1041 requirements.DOTENCODE_REQUIREMENT,
1043 requirements.FNCACHE_REQUIREMENT,
1042 requirements.FNCACHE_REQUIREMENT,
1044 requirements.GENERALDELTA_REQUIREMENT,
1043 requirements.GENERALDELTA_REQUIREMENT,
1045 requirements.NODEMAP_REQUIREMENT,
1044 requirements.NODEMAP_REQUIREMENT,
1046 requirements.REVLOGV1_REQUIREMENT, # allowed in case of downgrade
1045 requirements.REVLOGV1_REQUIREMENT, # allowed in case of downgrade
1047 requirements.REVLOGV2_REQUIREMENT,
1046 requirements.REVLOGV2_REQUIREMENT,
1048 requirements.SHARED_REQUIREMENT,
1047 requirements.SHARED_REQUIREMENT,
1049 requirements.SHARESAFE_REQUIREMENT,
1048 requirements.SHARESAFE_REQUIREMENT,
1050 requirements.SPARSEREVLOG_REQUIREMENT,
1049 requirements.SPARSEREVLOG_REQUIREMENT,
1051 requirements.STORE_REQUIREMENT,
1050 requirements.STORE_REQUIREMENT,
1052 requirements.TREEMANIFEST_REQUIREMENT,
1051 requirements.TREEMANIFEST_REQUIREMENT,
1053 requirements.NARROW_REQUIREMENT,
1052 requirements.NARROW_REQUIREMENT,
1054 }
1053 }
1055 for name in compression.compengines:
1054 for name in compression.compengines:
1056 engine = compression.compengines[name]
1055 engine = compression.compengines[name]
1057 if engine.available() and engine.revlogheader():
1056 if engine.available() and engine.revlogheader():
1058 supported.add(b'exp-compression-%s' % name)
1057 supported.add(b'exp-compression-%s' % name)
1059 if engine.name() == b'zstd':
1058 if engine.name() == b'zstd':
1060 supported.add(b'revlog-compression-zstd')
1059 supported.add(b'revlog-compression-zstd')
1061 return supported
1060 return supported
1062
1061
1063
1062
1064 def allowednewrequirements(repo):
1063 def allowednewrequirements(repo):
1065 """Obtain requirements that can be added to a repository during upgrade.
1064 """Obtain requirements that can be added to a repository during upgrade.
1066
1065
1067 This is used to disallow proposed requirements from being added when
1066 This is used to disallow proposed requirements from being added when
1068 they weren't present before.
1067 they weren't present before.
1069
1068
1070 We use a list of allowed requirement additions instead of a list of known
1069 We use a list of allowed requirement additions instead of a list of known
1071 bad additions because the whitelist approach is safer and will prevent
1070 bad additions because the whitelist approach is safer and will prevent
1072 future, unknown requirements from accidentally being added.
1071 future, unknown requirements from accidentally being added.
1073 """
1072 """
1074 supported = {
1073 supported = {
1075 requirements.DOTENCODE_REQUIREMENT,
1074 requirements.DOTENCODE_REQUIREMENT,
1076 requirements.FNCACHE_REQUIREMENT,
1075 requirements.FNCACHE_REQUIREMENT,
1077 requirements.GENERALDELTA_REQUIREMENT,
1076 requirements.GENERALDELTA_REQUIREMENT,
1078 requirements.SPARSEREVLOG_REQUIREMENT,
1077 requirements.SPARSEREVLOG_REQUIREMENT,
1079 requirements.COPIESSDC_REQUIREMENT,
1078 requirements.COPIESSDC_REQUIREMENT,
1080 requirements.NODEMAP_REQUIREMENT,
1079 requirements.NODEMAP_REQUIREMENT,
1081 requirements.SHARESAFE_REQUIREMENT,
1080 requirements.SHARESAFE_REQUIREMENT,
1082 requirements.REVLOGV1_REQUIREMENT,
1081 requirements.REVLOGV1_REQUIREMENT,
1083 requirements.REVLOGV2_REQUIREMENT,
1082 requirements.REVLOGV2_REQUIREMENT,
1084 requirements.CHANGELOGV2_REQUIREMENT,
1083 requirements.CHANGELOGV2_REQUIREMENT,
1085 requirements.DIRSTATE_TRACKED_HINT_V1,
1084 requirements.DIRSTATE_TRACKED_HINT_V1,
1086 requirements.DIRSTATE_V2_REQUIREMENT,
1085 requirements.DIRSTATE_V2_REQUIREMENT,
1087 }
1086 }
1088 for name in compression.compengines:
1087 for name in compression.compengines:
1089 engine = compression.compengines[name]
1088 engine = compression.compengines[name]
1090 if engine.available() and engine.revlogheader():
1089 if engine.available() and engine.revlogheader():
1091 supported.add(b'exp-compression-%s' % name)
1090 supported.add(b'exp-compression-%s' % name)
1092 if engine.name() == b'zstd':
1091 if engine.name() == b'zstd':
1093 supported.add(b'revlog-compression-zstd')
1092 supported.add(b'revlog-compression-zstd')
1094 return supported
1093 return supported
1095
1094
1096
1095
1097 def check_requirements_changes(repo, new_reqs):
1096 def check_requirements_changes(repo, new_reqs):
1098 old_reqs = repo.requirements
1097 old_reqs = repo.requirements
1099 check_revlog_version(repo.requirements)
1098 check_revlog_version(repo.requirements)
1100 support_removal = supportremovedrequirements(repo)
1099 support_removal = supportremovedrequirements(repo)
1101 no_remove_reqs = old_reqs - new_reqs - support_removal
1100 no_remove_reqs = old_reqs - new_reqs - support_removal
1102 if no_remove_reqs:
1101 if no_remove_reqs:
1103 msg = _(b'cannot upgrade repository; requirement would be removed: %s')
1102 msg = _(b'cannot upgrade repository; requirement would be removed: %s')
1104 no_remove_reqs = b', '.join(sorted(no_remove_reqs))
1103 no_remove_reqs = b', '.join(sorted(no_remove_reqs))
1105 raise error.Abort(msg % no_remove_reqs)
1104 raise error.Abort(msg % no_remove_reqs)
1106
1105
1107 support_addition = allowednewrequirements(repo)
1106 support_addition = allowednewrequirements(repo)
1108 no_add_reqs = new_reqs - old_reqs - support_addition
1107 no_add_reqs = new_reqs - old_reqs - support_addition
1109 if no_add_reqs:
1108 if no_add_reqs:
1110 m = _(b'cannot upgrade repository; do not support adding requirement: ')
1109 m = _(b'cannot upgrade repository; do not support adding requirement: ')
1111 no_add_reqs = b', '.join(sorted(no_add_reqs))
1110 no_add_reqs = b', '.join(sorted(no_add_reqs))
1112 raise error.Abort(m + no_add_reqs)
1111 raise error.Abort(m + no_add_reqs)
1113
1112
1114 supported = supporteddestrequirements(repo)
1113 supported = supporteddestrequirements(repo)
1115 unsupported_reqs = new_reqs - supported
1114 unsupported_reqs = new_reqs - supported
1116 if unsupported_reqs:
1115 if unsupported_reqs:
1117 msg = _(
1116 msg = _(
1118 b'cannot upgrade repository; do not support destination '
1117 b'cannot upgrade repository; do not support destination '
1119 b'requirement: %s'
1118 b'requirement: %s'
1120 )
1119 )
1121 unsupported_reqs = b', '.join(sorted(unsupported_reqs))
1120 unsupported_reqs = b', '.join(sorted(unsupported_reqs))
1122 raise error.Abort(msg % unsupported_reqs)
1121 raise error.Abort(msg % unsupported_reqs)
@@ -1,655 +1,655 b''
1 setup
1 setup
2
2
3 $ cat >> $HGRCPATH <<EOF
3 $ cat >> $HGRCPATH <<EOF
4 > [extensions]
4 > [extensions]
5 > share =
5 > share =
6 > [format]
6 > [format]
7 > use-share-safe = True
7 > use-share-safe = True
8 > [storage]
8 > [storage]
9 > revlog.persistent-nodemap.slow-path=allow
9 > revlog.persistent-nodemap.slow-path=allow
10 > # enforce zlib to ensure we can upgrade to zstd later
10 > # enforce zlib to ensure we can upgrade to zstd later
11 > [format]
11 > [format]
12 > revlog-compression=zlib
12 > revlog-compression=zlib
13 > # we want to be able to enable it later
13 > # we want to be able to enable it later
14 > use-persistent-nodemap=no
14 > use-persistent-nodemap=no
15 > EOF
15 > EOF
16
16
17 prepare source repo
17 prepare source repo
18
18
19 $ hg init source
19 $ hg init source
20 $ cd source
20 $ cd source
21 $ cat .hg/requires
21 $ cat .hg/requires
22 dirstate-v2 (dirstate-v2 !)
22 dirstate-v2 (dirstate-v2 !)
23 share-safe
23 share-safe
24 $ cat .hg/store/requires
24 $ cat .hg/store/requires
25 dotencode
25 dotencode
26 fncache
26 fncache
27 generaldelta
27 generaldelta
28 revlogv1
28 revlogv1
29 sparserevlog
29 sparserevlog
30 store
30 store
31 $ hg debugrequirements
31 $ hg debugrequirements
32 dotencode
32 dotencode
33 dirstate-v2 (dirstate-v2 !)
33 dirstate-v2 (dirstate-v2 !)
34 fncache
34 fncache
35 generaldelta
35 generaldelta
36 revlogv1
36 revlogv1
37 share-safe
37 share-safe
38 sparserevlog
38 sparserevlog
39 store
39 store
40
40
41 $ echo a > a
41 $ echo a > a
42 $ hg ci -Aqm "added a"
42 $ hg ci -Aqm "added a"
43 $ echo b > b
43 $ echo b > b
44 $ hg ci -Aqm "added b"
44 $ hg ci -Aqm "added b"
45
45
46 $ HGEDITOR=cat hg config --shared
46 $ HGEDITOR=cat hg config --shared
47 abort: repository is not shared; can't use --shared
47 abort: repository is not shared; can't use --shared
48 [10]
48 [10]
49 $ cd ..
49 $ cd ..
50
50
51 Create a shared repo and check the requirements are shared and read correctly
51 Create a shared repo and check the requirements are shared and read correctly
52 $ hg share source shared1
52 $ hg share source shared1
53 updating working directory
53 updating working directory
54 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
54 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
55 $ cd shared1
55 $ cd shared1
56 $ cat .hg/requires
56 $ cat .hg/requires
57 dirstate-v2 (dirstate-v2 !)
57 dirstate-v2 (dirstate-v2 !)
58 share-safe
58 share-safe
59 shared
59 shared
60
60
61 $ hg debugrequirements -R ../source
61 $ hg debugrequirements -R ../source
62 dotencode
62 dotencode
63 dirstate-v2 (dirstate-v2 !)
63 dirstate-v2 (dirstate-v2 !)
64 fncache
64 fncache
65 generaldelta
65 generaldelta
66 revlogv1
66 revlogv1
67 share-safe
67 share-safe
68 sparserevlog
68 sparserevlog
69 store
69 store
70
70
71 $ hg debugrequirements
71 $ hg debugrequirements
72 dotencode
72 dotencode
73 dirstate-v2 (dirstate-v2 !)
73 dirstate-v2 (dirstate-v2 !)
74 fncache
74 fncache
75 generaldelta
75 generaldelta
76 revlogv1
76 revlogv1
77 share-safe
77 share-safe
78 shared
78 shared
79 sparserevlog
79 sparserevlog
80 store
80 store
81
81
82 $ echo c > c
82 $ echo c > c
83 $ hg ci -Aqm "added c"
83 $ hg ci -Aqm "added c"
84
84
85 Check that config of the source repository is also loaded
85 Check that config of the source repository is also loaded
86
86
87 $ hg showconfig ui.curses
87 $ hg showconfig ui.curses
88 [1]
88 [1]
89
89
90 $ echo "[ui]" >> ../source/.hg/hgrc
90 $ echo "[ui]" >> ../source/.hg/hgrc
91 $ echo "curses=true" >> ../source/.hg/hgrc
91 $ echo "curses=true" >> ../source/.hg/hgrc
92
92
93 $ hg showconfig ui.curses
93 $ hg showconfig ui.curses
94 true
94 true
95
95
96 Test that extensions of source repository are also loaded
96 Test that extensions of source repository are also loaded
97
97
98 $ hg debugextensions
98 $ hg debugextensions
99 share
99 share
100 $ hg extdiff -p echo
100 $ hg extdiff -p echo
101 hg: unknown command 'extdiff'
101 hg: unknown command 'extdiff'
102 'extdiff' is provided by the following extension:
102 'extdiff' is provided by the following extension:
103
103
104 extdiff command to allow external programs to compare revisions
104 extdiff command to allow external programs to compare revisions
105
105
106 (use 'hg help extensions' for information on enabling extensions)
106 (use 'hg help extensions' for information on enabling extensions)
107 [10]
107 [10]
108
108
109 $ echo "[extensions]" >> ../source/.hg/hgrc
109 $ echo "[extensions]" >> ../source/.hg/hgrc
110 $ echo "extdiff=" >> ../source/.hg/hgrc
110 $ echo "extdiff=" >> ../source/.hg/hgrc
111
111
112 $ hg debugextensions -R ../source
112 $ hg debugextensions -R ../source
113 extdiff
113 extdiff
114 share
114 share
115 $ hg extdiff -R ../source -p echo
115 $ hg extdiff -R ../source -p echo
116
116
117 BROKEN: the command below will not work if config of shared source is not loaded
117 BROKEN: the command below will not work if config of shared source is not loaded
118 on dispatch but debugextensions says that extension
118 on dispatch but debugextensions says that extension
119 is loaded
119 is loaded
120 $ hg debugextensions
120 $ hg debugextensions
121 extdiff
121 extdiff
122 share
122 share
123
123
124 $ hg extdiff -p echo
124 $ hg extdiff -p echo
125
125
126 However, local .hg/hgrc should override the config set by share source
126 However, local .hg/hgrc should override the config set by share source
127
127
128 $ echo "[ui]" >> .hg/hgrc
128 $ echo "[ui]" >> .hg/hgrc
129 $ echo "curses=false" >> .hg/hgrc
129 $ echo "curses=false" >> .hg/hgrc
130
130
131 $ hg showconfig ui.curses
131 $ hg showconfig ui.curses
132 false
132 false
133
133
134 $ HGEDITOR=cat hg config --shared
134 $ HGEDITOR=cat hg config --shared
135 [ui]
135 [ui]
136 curses=true
136 curses=true
137 [extensions]
137 [extensions]
138 extdiff=
138 extdiff=
139
139
140 $ HGEDITOR=cat hg config --local
140 $ HGEDITOR=cat hg config --local
141 [ui]
141 [ui]
142 curses=false
142 curses=false
143
143
144 Testing that hooks set in source repository also runs in shared repo
144 Testing that hooks set in source repository also runs in shared repo
145
145
146 $ cd ../source
146 $ cd ../source
147 $ cat <<EOF >> .hg/hgrc
147 $ cat <<EOF >> .hg/hgrc
148 > [extensions]
148 > [extensions]
149 > hooklib=
149 > hooklib=
150 > [hooks]
150 > [hooks]
151 > pretxnchangegroup.reject_merge_commits = \
151 > pretxnchangegroup.reject_merge_commits = \
152 > python:hgext.hooklib.reject_merge_commits.hook
152 > python:hgext.hooklib.reject_merge_commits.hook
153 > EOF
153 > EOF
154
154
155 $ cd ..
155 $ cd ..
156 $ hg clone source cloned
156 $ hg clone source cloned
157 updating to branch default
157 updating to branch default
158 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
158 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
159 $ cd cloned
159 $ cd cloned
160 $ hg up 0
160 $ hg up 0
161 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
161 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
162 $ echo bar > bar
162 $ echo bar > bar
163 $ hg ci -Aqm "added bar"
163 $ hg ci -Aqm "added bar"
164 $ hg merge
164 $ hg merge
165 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
165 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
166 (branch merge, don't forget to commit)
166 (branch merge, don't forget to commit)
167 $ hg ci -m "merge commit"
167 $ hg ci -m "merge commit"
168
168
169 $ hg push ../source
169 $ hg push ../source
170 pushing to ../source
170 pushing to ../source
171 searching for changes
171 searching for changes
172 adding changesets
172 adding changesets
173 adding manifests
173 adding manifests
174 adding file changes
174 adding file changes
175 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
175 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
176 transaction abort!
176 transaction abort!
177 rollback completed
177 rollback completed
178 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
178 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
179 [255]
179 [255]
180
180
181 $ hg push ../shared1
181 $ hg push ../shared1
182 pushing to ../shared1
182 pushing to ../shared1
183 searching for changes
183 searching for changes
184 adding changesets
184 adding changesets
185 adding manifests
185 adding manifests
186 adding file changes
186 adding file changes
187 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
187 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
188 transaction abort!
188 transaction abort!
189 rollback completed
189 rollback completed
190 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
190 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
191 [255]
191 [255]
192
192
193 Test that if share source config is untrusted, we dont read it
193 Test that if share source config is untrusted, we dont read it
194
194
195 $ cd ../shared1
195 $ cd ../shared1
196
196
197 $ cat << EOF > $TESTTMP/untrusted.py
197 $ cat << EOF > $TESTTMP/untrusted.py
198 > from mercurial import scmutil, util
198 > from mercurial import scmutil, util
199 > def uisetup(ui):
199 > def uisetup(ui):
200 > class untrustedui(ui.__class__):
200 > class untrustedui(ui.__class__):
201 > def _trusted(self, fp, f):
201 > def _trusted(self, fp, f):
202 > if util.normpath(fp.name).endswith(b'source/.hg/hgrc'):
202 > if util.normpath(fp.name).endswith(b'source/.hg/hgrc'):
203 > return False
203 > return False
204 > return super(untrustedui, self)._trusted(fp, f)
204 > return super(untrustedui, self)._trusted(fp, f)
205 > ui.__class__ = untrustedui
205 > ui.__class__ = untrustedui
206 > EOF
206 > EOF
207
207
208 $ hg showconfig hooks
208 $ hg showconfig hooks
209 hooks.pretxnchangegroup.reject_merge_commits=python:hgext.hooklib.reject_merge_commits.hook
209 hooks.pretxnchangegroup.reject_merge_commits=python:hgext.hooklib.reject_merge_commits.hook
210
210
211 $ hg showconfig hooks --config extensions.untrusted=$TESTTMP/untrusted.py
211 $ hg showconfig hooks --config extensions.untrusted=$TESTTMP/untrusted.py
212 [1]
212 [1]
213
213
214 Update the source repository format and check that shared repo works
214 Update the source repository format and check that shared repo works
215
215
216 $ cd ../source
216 $ cd ../source
217
217
218 Disable zstd related tests because its not present on pure version
218 Disable zstd related tests because its not present on pure version
219 #if zstd
219 #if zstd
220 $ echo "[format]" >> .hg/hgrc
220 $ echo "[format]" >> .hg/hgrc
221 $ echo "revlog-compression=zstd" >> .hg/hgrc
221 $ echo "revlog-compression=zstd" >> .hg/hgrc
222
222
223 $ hg debugupgraderepo --run -q
223 $ hg debugupgraderepo --run -q
224 upgrade will perform the following actions:
224 upgrade will perform the following actions:
225
225
226 requirements
226 requirements
227 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-dirstate-v2 !)
227 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-dirstate-v2 !)
228 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (dirstate-v2 !)
228 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (dirstate-v2 !)
229 added: revlog-compression-zstd
229 added: revlog-compression-zstd
230
230
231 processed revlogs:
231 processed revlogs:
232 - all-filelogs
232 - all-filelogs
233 - changelog
233 - changelog
234 - manifest
234 - manifest
235
235
236 $ hg log -r .
236 $ hg log -r .
237 changeset: 1:5f6d8a4bf34a
237 changeset: 1:5f6d8a4bf34a
238 user: test
238 user: test
239 date: Thu Jan 01 00:00:00 1970 +0000
239 date: Thu Jan 01 00:00:00 1970 +0000
240 summary: added b
240 summary: added b
241
241
242 #endif
242 #endif
243 $ echo "[format]" >> .hg/hgrc
243 $ echo "[format]" >> .hg/hgrc
244 $ echo "use-persistent-nodemap=True" >> .hg/hgrc
244 $ echo "use-persistent-nodemap=True" >> .hg/hgrc
245
245
246 $ hg debugupgraderepo --run -q -R ../shared1
246 $ hg debugupgraderepo --run -q -R ../shared1
247 abort: cannot use these actions on a share repository: persistent-nodemap
247 abort: cannot use these actions on a share repository: persistent-nodemap
248 (upgrade the main repository directly)
248 (upgrade the main repository directly)
249 [255]
249 [255]
250
250
251 $ hg debugupgraderepo --run -q
251 $ hg debugupgraderepo --run -q
252 upgrade will perform the following actions:
252 upgrade will perform the following actions:
253
253
254 requirements
254 requirements
255 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd no-dirstate-v2 !)
255 preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd no-dirstate-v2 !)
256 preserved: dotencode, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd no-dirstate-v2 !)
256 preserved: dotencode, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd no-dirstate-v2 !)
257 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd dirstate-v2 !)
257 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-zstd dirstate-v2 !)
258 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd dirstate-v2 !)
258 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlog-compression-zstd, revlogv1, share-safe, sparserevlog, store (zstd dirstate-v2 !)
259 added: persistent-nodemap
259 added: persistent-nodemap
260
260
261 processed revlogs:
261 processed revlogs:
262 - all-filelogs
262 - all-filelogs
263 - changelog
263 - changelog
264 - manifest
264 - manifest
265
265
266 $ hg log -r .
266 $ hg log -r .
267 changeset: 1:5f6d8a4bf34a
267 changeset: 1:5f6d8a4bf34a
268 user: test
268 user: test
269 date: Thu Jan 01 00:00:00 1970 +0000
269 date: Thu Jan 01 00:00:00 1970 +0000
270 summary: added b
270 summary: added b
271
271
272
272
273 Shared one should work
273 Shared one should work
274 $ cd ../shared1
274 $ cd ../shared1
275 $ hg log -r .
275 $ hg log -r .
276 changeset: 2:155349b645be
276 changeset: 2:155349b645be
277 tag: tip
277 tag: tip
278 user: test
278 user: test
279 date: Thu Jan 01 00:00:00 1970 +0000
279 date: Thu Jan 01 00:00:00 1970 +0000
280 summary: added c
280 summary: added c
281
281
282
282
283 Testing that nonsharedrc is loaded for source and not shared
283 Testing that nonsharedrc is loaded for source and not shared
284
284
285 $ cd ../source
285 $ cd ../source
286 $ touch .hg/hgrc-not-shared
286 $ touch .hg/hgrc-not-shared
287 $ echo "[ui]" >> .hg/hgrc-not-shared
287 $ echo "[ui]" >> .hg/hgrc-not-shared
288 $ echo "traceback=true" >> .hg/hgrc-not-shared
288 $ echo "traceback=true" >> .hg/hgrc-not-shared
289
289
290 $ hg showconfig ui.traceback
290 $ hg showconfig ui.traceback
291 true
291 true
292
292
293 $ HGEDITOR=cat hg config --non-shared
293 $ HGEDITOR=cat hg config --non-shared
294 [ui]
294 [ui]
295 traceback=true
295 traceback=true
296
296
297 $ cd ../shared1
297 $ cd ../shared1
298 $ hg showconfig ui.traceback
298 $ hg showconfig ui.traceback
299 [1]
299 [1]
300
300
301 Unsharing works
301 Unsharing works
302
302
303 $ hg unshare
303 $ hg unshare
304
304
305 Test that source config is added to the shared one after unshare, and the config
305 Test that source config is added to the shared one after unshare, and the config
306 of current repo is still respected over the config which came from source config
306 of current repo is still respected over the config which came from source config
307 $ cd ../cloned
307 $ cd ../cloned
308 $ hg push ../shared1
308 $ hg push ../shared1
309 pushing to ../shared1
309 pushing to ../shared1
310 searching for changes
310 searching for changes
311 adding changesets
311 adding changesets
312 adding manifests
312 adding manifests
313 adding file changes
313 adding file changes
314 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
314 error: pretxnchangegroup.reject_merge_commits hook failed: bcde3522682d rejected as merge on the same branch. Please consider rebase.
315 transaction abort!
315 transaction abort!
316 rollback completed
316 rollback completed
317 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
317 abort: bcde3522682d rejected as merge on the same branch. Please consider rebase.
318 [255]
318 [255]
319 $ hg showconfig ui.curses -R ../shared1
319 $ hg showconfig ui.curses -R ../shared1
320 false
320 false
321
321
322 $ cd ../
322 $ cd ../
323
323
324 Test that upgrading using debugupgraderepo works
324 Test that upgrading using debugupgraderepo works
325 =================================================
325 =================================================
326
326
327 $ hg init non-share-safe --config format.use-share-safe=false
327 $ hg init non-share-safe --config format.use-share-safe=false
328 $ cd non-share-safe
328 $ cd non-share-safe
329 $ hg debugrequirements
329 $ hg debugrequirements
330 dotencode
330 dotencode
331 dirstate-v2 (dirstate-v2 !)
331 dirstate-v2 (dirstate-v2 !)
332 fncache
332 fncache
333 generaldelta
333 generaldelta
334 revlogv1
334 revlogv1
335 sparserevlog
335 sparserevlog
336 store
336 store
337 $ echo foo > foo
337 $ echo foo > foo
338 $ hg ci -Aqm 'added foo'
338 $ hg ci -Aqm 'added foo'
339 $ echo bar > bar
339 $ echo bar > bar
340 $ hg ci -Aqm 'added bar'
340 $ hg ci -Aqm 'added bar'
341
341
342 Create a share before upgrading
342 Create a share before upgrading
343
343
344 $ cd ..
344 $ cd ..
345 $ hg share non-share-safe nss-share
345 $ hg share non-share-safe nss-share
346 updating working directory
346 updating working directory
347 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
347 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
348 $ hg debugrequirements -R nss-share
348 $ hg debugrequirements -R nss-share
349 dotencode
349 dotencode
350 dirstate-v2 (dirstate-v2 !)
350 dirstate-v2 (dirstate-v2 !)
351 fncache
351 fncache
352 generaldelta
352 generaldelta
353 revlogv1
353 revlogv1
354 shared
354 shared
355 sparserevlog
355 sparserevlog
356 store
356 store
357 $ cd non-share-safe
357 $ cd non-share-safe
358
358
359 Upgrade
359 Upgrade
360
360
361 $ hg debugupgraderepo -q
361 $ hg debugupgraderepo -q
362 requirements
362 requirements
363 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
363 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
364 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
364 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
365 added: share-safe
365 added: share-safe
366
366
367 no revlogs to process
367 no revlogs to process
368
368
369 $ hg debugupgraderepo --run
369 $ hg debugupgraderepo --run
370 upgrade will perform the following actions:
370 upgrade will perform the following actions:
371
371
372 requirements
372 requirements
373 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
373 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
374 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
374 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
375 added: share-safe
375 added: share-safe
376
376
377 share-safe
377 share-safe
378 Upgrades a repository to share-safe format so that future shares of this repository share its requirements and configs.
378 Upgrades a repository to share-safe format so that future shares of this repository share its requirements and configs.
379
379
380 no revlogs to process
380 no revlogs to process
381
381
382 beginning upgrade...
382 beginning upgrade...
383 repository locked and read-only
383 repository locked and read-only
384 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
384 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
385 (it is safe to interrupt this process any time before data migration completes)
385 (it is safe to interrupt this process any time before data migration completes)
386 upgrading repository requirements
386 upgrading repository requirements
387 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
387 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
388 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
388 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
389
389
390 $ hg debugrequirements
390 $ hg debugrequirements
391 dotencode
391 dotencode
392 dirstate-v2 (dirstate-v2 !)
392 dirstate-v2 (dirstate-v2 !)
393 fncache
393 fncache
394 generaldelta
394 generaldelta
395 revlogv1
395 revlogv1
396 share-safe
396 share-safe
397 sparserevlog
397 sparserevlog
398 store
398 store
399
399
400 $ cat .hg/requires
400 $ cat .hg/requires
401 dirstate-v2 (dirstate-v2 !)
401 dirstate-v2 (dirstate-v2 !)
402 share-safe
402 share-safe
403
403
404 $ cat .hg/store/requires
404 $ cat .hg/store/requires
405 dotencode
405 dotencode
406 fncache
406 fncache
407 generaldelta
407 generaldelta
408 revlogv1
408 revlogv1
409 sparserevlog
409 sparserevlog
410 store
410 store
411
411
412 $ hg log -GT "{node}: {desc}\n"
412 $ hg log -GT "{node}: {desc}\n"
413 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
413 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
414 |
414 |
415 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
415 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
416
416
417
417
418 Make sure existing shares dont work with default config
418 Make sure existing shares dont work with default config
419
419
420 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
420 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
421 abort: version mismatch: source uses share-safe functionality while the current share does not
421 abort: version mismatch: source uses share-safe functionality while the current share does not
422 (see `hg help config.format.use-share-safe` for more information)
422 (see `hg help config.format.use-share-safe` for more information)
423 [255]
423 [255]
424
424
425
425
426 Create a safe share from upgrade one
426 Create a safe share from upgrade one
427
427
428 $ cd ..
428 $ cd ..
429 $ hg share non-share-safe ss-share
429 $ hg share non-share-safe ss-share
430 updating working directory
430 updating working directory
431 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
431 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
432 $ cd ss-share
432 $ cd ss-share
433 $ hg log -GT "{node}: {desc}\n"
433 $ hg log -GT "{node}: {desc}\n"
434 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
434 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
435 |
435 |
436 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
436 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
437
437
438 $ cd ../non-share-safe
438 $ cd ../non-share-safe
439
439
440 Test that downgrading works too
440 Test that downgrading works too
441
441
442 $ cat >> $HGRCPATH <<EOF
442 $ cat >> $HGRCPATH <<EOF
443 > [extensions]
443 > [extensions]
444 > share =
444 > share =
445 > [format]
445 > [format]
446 > use-share-safe = False
446 > use-share-safe = False
447 > EOF
447 > EOF
448
448
449 $ hg debugupgraderepo -q
449 $ hg debugupgraderepo -q
450 requirements
450 requirements
451 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
451 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
452 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
452 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
453 removed: share-safe
453 removed: share-safe
454
454
455 no revlogs to process
455 no revlogs to process
456
456
457 $ hg debugupgraderepo --run
457 $ hg debugupgraderepo --run
458 upgrade will perform the following actions:
458 upgrade will perform the following actions:
459
459
460 requirements
460 requirements
461 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
461 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
462 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
462 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
463 removed: share-safe
463 removed: share-safe
464
464
465 no revlogs to process
465 no revlogs to process
466
466
467 beginning upgrade...
467 beginning upgrade...
468 repository locked and read-only
468 repository locked and read-only
469 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
469 creating temporary repository to stage upgraded data: $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
470 (it is safe to interrupt this process any time before data migration completes)
470 (it is safe to interrupt this process any time before data migration completes)
471 upgrading repository requirements
471 upgrading repository requirements
472 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
472 removing temporary repository $TESTTMP/non-share-safe/.hg/upgrade.* (glob)
473 repository downgraded to not use share safe mode, existing shares will not work and needs to be reshared.
473 repository downgraded to not use share safe mode, existing shares will not work and need to be reshared.
474
474
475 $ hg debugrequirements
475 $ hg debugrequirements
476 dotencode
476 dotencode
477 dirstate-v2 (dirstate-v2 !)
477 dirstate-v2 (dirstate-v2 !)
478 fncache
478 fncache
479 generaldelta
479 generaldelta
480 revlogv1
480 revlogv1
481 sparserevlog
481 sparserevlog
482 store
482 store
483
483
484 $ cat .hg/requires
484 $ cat .hg/requires
485 dotencode
485 dotencode
486 dirstate-v2 (dirstate-v2 !)
486 dirstate-v2 (dirstate-v2 !)
487 fncache
487 fncache
488 generaldelta
488 generaldelta
489 revlogv1
489 revlogv1
490 sparserevlog
490 sparserevlog
491 store
491 store
492
492
493 $ test -f .hg/store/requires
493 $ test -f .hg/store/requires
494 [1]
494 [1]
495
495
496 $ hg log -GT "{node}: {desc}\n"
496 $ hg log -GT "{node}: {desc}\n"
497 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
497 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
498 |
498 |
499 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
499 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
500
500
501
501
502 Make sure existing shares still works
502 Make sure existing shares still works
503
503
504 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
504 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
505 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
505 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
506 |
506 |
507 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
507 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
508
508
509
509
510 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
510 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
511 abort: share source does not support share-safe requirement
511 abort: share source does not support share-safe requirement
512 (see `hg help config.format.use-share-safe` for more information)
512 (see `hg help config.format.use-share-safe` for more information)
513 [255]
513 [255]
514
514
515 Testing automatic downgrade of shares when config is set
515 Testing automatic downgrade of shares when config is set
516
516
517 $ touch ../ss-share/.hg/wlock
517 $ touch ../ss-share/.hg/wlock
518 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
518 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
519 abort: failed to downgrade share, got error: Lock held
519 abort: failed to downgrade share, got error: Lock held
520 (see `hg help config.format.use-share-safe` for more information)
520 (see `hg help config.format.use-share-safe` for more information)
521 [255]
521 [255]
522 $ rm ../ss-share/.hg/wlock
522 $ rm ../ss-share/.hg/wlock
523
523
524 $ cp -R ../ss-share ../ss-share-bck
524 $ cp -R ../ss-share ../ss-share-bck
525 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
525 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort
526 repository downgraded to not use share-safe mode
526 repository downgraded to not use share-safe mode
527 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
527 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
528 |
528 |
529 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
529 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
530
530
531 $ rm -rf ../ss-share
531 $ rm -rf ../ss-share
532 $ mv ../ss-share-bck ../ss-share
532 $ mv ../ss-share-bck ../ss-share
533
533
534 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort --config share.safe-mismatch.source-not-safe:verbose-upgrade=no
534 $ hg log -GT "{node}: {desc}\n" -R ../ss-share --config share.safe-mismatch.source-not-safe=downgrade-abort --config share.safe-mismatch.source-not-safe:verbose-upgrade=no
535 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
535 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
536 |
536 |
537 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
537 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
538
538
539
539
540 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
540 $ hg log -GT "{node}: {desc}\n" -R ../ss-share
541 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
541 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
542 |
542 |
543 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
543 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
544
544
545
545
546
546
547 Testing automatic upgrade of shares when config is set
547 Testing automatic upgrade of shares when config is set
548
548
549 $ hg debugupgraderepo -q --run --config format.use-share-safe=True
549 $ hg debugupgraderepo -q --run --config format.use-share-safe=True
550 upgrade will perform the following actions:
550 upgrade will perform the following actions:
551
551
552 requirements
552 requirements
553 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
553 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store (no-dirstate-v2 !)
554 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
554 preserved: dotencode, use-dirstate-v2, fncache, generaldelta, revlogv1, sparserevlog, store (dirstate-v2 !)
555 added: share-safe
555 added: share-safe
556
556
557 no revlogs to process
557 no revlogs to process
558
558
559 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
559 repository upgraded to share safe mode, existing shares will still work in old non-safe mode. Re-share existing shares to use them in safe mode New shares will be created in safe mode.
560 $ hg debugrequirements
560 $ hg debugrequirements
561 dotencode
561 dotencode
562 dirstate-v2 (dirstate-v2 !)
562 dirstate-v2 (dirstate-v2 !)
563 fncache
563 fncache
564 generaldelta
564 generaldelta
565 revlogv1
565 revlogv1
566 share-safe
566 share-safe
567 sparserevlog
567 sparserevlog
568 store
568 store
569 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
569 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
570 abort: version mismatch: source uses share-safe functionality while the current share does not
570 abort: version mismatch: source uses share-safe functionality while the current share does not
571 (see `hg help config.format.use-share-safe` for more information)
571 (see `hg help config.format.use-share-safe` for more information)
572 [255]
572 [255]
573
573
574 Check that if lock is taken, upgrade fails but read operation are successful
574 Check that if lock is taken, upgrade fails but read operation are successful
575 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgra
575 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgra
576 abort: share-safe mismatch with source.
576 abort: share-safe mismatch with source.
577 Unrecognized value 'upgra' of `share.safe-mismatch.source-safe` set.
577 Unrecognized value 'upgra' of `share.safe-mismatch.source-safe` set.
578 (see `hg help config.format.use-share-safe` for more information)
578 (see `hg help config.format.use-share-safe` for more information)
579 [255]
579 [255]
580 $ touch ../nss-share/.hg/wlock
580 $ touch ../nss-share/.hg/wlock
581 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow
581 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow
582 failed to upgrade share, got error: Lock held
582 failed to upgrade share, got error: Lock held
583 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
583 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
584 |
584 |
585 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
585 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
586
586
587
587
588 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow --config share.safe-mismatch.source-safe.warn=False
588 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-allow --config share.safe-mismatch.source-safe.warn=False
589 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
589 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
590 |
590 |
591 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
591 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
592
592
593
593
594 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
594 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
595 abort: failed to upgrade share, got error: Lock held
595 abort: failed to upgrade share, got error: Lock held
596 (see `hg help config.format.use-share-safe` for more information)
596 (see `hg help config.format.use-share-safe` for more information)
597 [255]
597 [255]
598
598
599 $ rm ../nss-share/.hg/wlock
599 $ rm ../nss-share/.hg/wlock
600 $ cp -R ../nss-share ../nss-share-bck
600 $ cp -R ../nss-share ../nss-share-bck
601 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
601 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort
602 repository upgraded to use share-safe mode
602 repository upgraded to use share-safe mode
603 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
603 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
604 |
604 |
605 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
605 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
606
606
607 $ rm -rf ../nss-share
607 $ rm -rf ../nss-share
608 $ mv ../nss-share-bck ../nss-share
608 $ mv ../nss-share-bck ../nss-share
609 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort --config share.safe-mismatch.source-safe:verbose-upgrade=no
609 $ hg log -GT "{node}: {desc}\n" -R ../nss-share --config share.safe-mismatch.source-safe=upgrade-abort --config share.safe-mismatch.source-safe:verbose-upgrade=no
610 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
610 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
611 |
611 |
612 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
612 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
613
613
614
614
615 Test that unshare works
615 Test that unshare works
616
616
617 $ hg unshare -R ../nss-share
617 $ hg unshare -R ../nss-share
618 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
618 $ hg log -GT "{node}: {desc}\n" -R ../nss-share
619 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
619 @ f63db81e6dde1d9c78814167f77fb1fb49283f4f: added bar
620 |
620 |
621 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
621 o f3ba8b99bb6f897c87bbc1c07b75c6ddf43a4f77: added foo
622
622
623
623
624 Test automatique upgrade/downgrade of main-repository
624 Test automatique upgrade/downgrade of main-repository
625 ------------------------------------------------------
625 ------------------------------------------------------
626
626
627 create an initial repository
627 create an initial repository
628
628
629 $ hg init auto-upgrade \
629 $ hg init auto-upgrade \
630 > --config format.use-share-safe=no
630 > --config format.use-share-safe=no
631 $ hg debugbuilddag -R auto-upgrade --new-file .+5
631 $ hg debugbuilddag -R auto-upgrade --new-file .+5
632 $ hg -R auto-upgrade update
632 $ hg -R auto-upgrade update
633 6 files updated, 0 files merged, 0 files removed, 0 files unresolved
633 6 files updated, 0 files merged, 0 files removed, 0 files unresolved
634 $ hg debugformat -R auto-upgrade | grep share-safe
634 $ hg debugformat -R auto-upgrade | grep share-safe
635 share-safe: no
635 share-safe: no
636
636
637 upgrade it to share-safe automatically
637 upgrade it to share-safe automatically
638
638
639 $ hg status -R auto-upgrade \
639 $ hg status -R auto-upgrade \
640 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
640 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
641 > --config format.use-share-safe=yes
641 > --config format.use-share-safe=yes
642 automatically upgrading repository to the `share-safe` feature
642 automatically upgrading repository to the `share-safe` feature
643 (see `hg help config.format.use-share-safe` for details)
643 (see `hg help config.format.use-share-safe` for details)
644 $ hg debugformat -R auto-upgrade | grep share-safe
644 $ hg debugformat -R auto-upgrade | grep share-safe
645 share-safe: yes
645 share-safe: yes
646
646
647 downgrade it from share-safe automatically
647 downgrade it from share-safe automatically
648
648
649 $ hg status -R auto-upgrade \
649 $ hg status -R auto-upgrade \
650 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
650 > --config format.use-share-safe.automatic-upgrade-of-mismatching-repositories=yes \
651 > --config format.use-share-safe=no
651 > --config format.use-share-safe=no
652 automatically downgrading repository from the `share-safe` feature
652 automatically downgrading repository from the `share-safe` feature
653 (see `hg help config.format.use-share-safe` for details)
653 (see `hg help config.format.use-share-safe` for details)
654 $ hg debugformat -R auto-upgrade | grep share-safe
654 $ hg debugformat -R auto-upgrade | grep share-safe
655 share-safe: no
655 share-safe: no
General Comments 0
You need to be logged in to leave comments. Login now