exchange: improve computation of relevant markers for large repos...
Joerg Sonnenberger
r52789:8583d138 default
@@ -1,2690 +1,2690 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler, that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters. Each
            couple contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import annotations

import collections
import errno
import os
import re
import string
import struct
import sys
import typing

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

if typing.TYPE_CHECKING:
    from typing import (
        Dict,
        List,
        Optional,
        Tuple,
        Union,
    )

    Capabilities = Dict[bytes, Union[List[bytes], Tuple[bytes, ...]]]

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')

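# Illustrative sketch (not part of the upstream module): how the stream-level
# framing described in the module docstring maps onto the struct formats
# defined above. The parameter content is made up for the example; a real
# bundle would carry real stream parameters and at least one part.
def _sketch_minimal_stream():
    # urlquoted, space separated "name=value" stream parameters
    params = b'Compression=UN'
    return b''.join(
        [
            b'HG20',  # magic string
            _pack(_fstreamparamsize, len(params)),  # int32 params size
            params,  # params blob
            _pack(_fpartheadersize, 0),  # empty part header == end of stream
        ]
    )
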

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator

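# Illustrative sketch (not part of the upstream module): how an extension
# might register an advisory handler with the decorator above. The part type
# 'x-sketch:ping' and its 'reason' parameter are hypothetical; the lower case
# type makes the part advisory, so unaware peers simply ignore it.
def _sketch_register_handler():
    @parthandler(b'x-sketch:ping', (b'reason',))
    def handlesketchping(op, part):
        # record the advisory part so callers can inspect it later
        op.records.add(b'x-sketch:ping', {b'reason': part.params[b'reason']})

    return handlesketchping
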

class unbundlerecords:
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

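# Illustrative sketch (not part of the upstream module): typical use of the
# record container above. The category names and payloads are made up.
def _sketch_records_usage():
    records = unbundlerecords()
    records.add(b'changegroup', {b'return': 1})
    records.add(b'output', b'remote: hello\n')
    assert len(records[b'changegroup']) == 1
    # iteration yields (category, entry) pairs in chronological order
    return list(records)
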

class bundleoperation:
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(
        self,
        repo,
        transactiongetter,
        captureoutput=True,
        source=b'',
        remote=None,
    ):
        self.repo = repo
        # the peer object who produced this bundle if available
        self.remote = remote
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries value that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(
            repo, unbundler, lambda: tr, source=source, remote=remote
        )
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator:
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
        # and should not be cleaned up gracefully.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(
    repo,
    unbundler,
    transactiongetter=None,
    op=None,
    source=b'',
    remote=None,
):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(
            repo,
            transactiongetter,
            source=source,
            remote=remote,
        )
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


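# Illustrative sketch (not part of the upstream module): feeding a bundle2
# stream to a repository with the helpers above. `repo` and `fp` are assumed
# to be a local repository object and a readable binary stream; locking and
# error handling are omitted for brevity.
def _sketch_apply_bundle(repo, fp):
    unbundler = getunbundler(repo.ui, fp)
    with repo.transaction(b'sketch-unbundle') as tr:
        op = applybundle(repo, unbundler, tr, source=b'push')
    # the changegroup handler records its return value for the caller
    return op.records[b'changegroup']

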
def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    if op.remote is not None and op.remote.path is not None:
        remote_path = op.remote.path
        kwargs = kwargs.copy()
        kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob: bytes) -> "Capabilities":
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)


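# Illustrative sketch (not part of the upstream module): round-tripping a
# capabilities dictionary through the helpers above. The capability names and
# values are made up for the example.
def _sketch_caps_roundtrip():
    caps = {b'HG20': (), b'checkheads': [b'related']}
    blob = encodecaps(caps)  # b'HG20\ncheckheads=related'
    decoded = decodecaps(blob)
    assert decoded[b'checkheads'] == [b'related']
    return decoded

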
bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20:
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities: "Optional[Capabilities]" = None):
        if capabilities is None:
            capabilities = {}

        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities: "Capabilities" = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunk for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged


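# Illustrative sketch (not part of the upstream module): assembling a small
# bundle2 container with the class above and writing it out. `ui` is assumed
# to be a ui object and the part payload is made up; real callers usually go
# through exchange.getbundlechunks() or similar.
def _sketch_write_bundle(ui, fileobj):
    bundler = bundle20(ui)
    part = bundler.newpart(b'output', data=b'hello from a sketch\n',
                           mandatory=False)
    part.addparam(b'in-reply-to', b'0', mandatory=False)
    # getchunks() yields the magic string, stream parameters and every part
    for chunk in bundler.getchunks():
        fileobj.write(chunk)

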
class unpackermixin:
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler


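# Illustrative sketch (not part of the upstream module): reading parts back
# from a bundle file on disk with the helper above. `ui` is assumed to be a
# ui object, the path is made up, and parts are only inspected, not applied.
def _sketch_list_parts(ui, path=b'example.hg'):
    with open(path, 'rb') as fp:
        unbundler = getunbundler(ui, fp)
        seen = []
        for part in unbundler.iterparts():
            # each part reports its type and whether it is mandatory
            seen.append((part.type, part.mandatory))
        return seen

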
843 class unbundle20(unpackermixin):
843 class unbundle20(unpackermixin):
844 """interpret a bundle2 stream
844 """interpret a bundle2 stream
845
845
846 This class is fed with a binary stream and yields parts through its
846 This class is fed with a binary stream and yields parts through its
847 `iterparts` methods."""
847 `iterparts` methods."""
848
848
849 _magicstring = b'HG20'
849 _magicstring = b'HG20'
850
850
851 def __init__(self, ui, fp):
851 def __init__(self, ui, fp):
852 """If header is specified, we do not read it out of the stream."""
852 """If header is specified, we do not read it out of the stream."""
853 self.ui = ui
853 self.ui = ui
854 self._compengine = util.compengines.forbundletype(b'UN')
854 self._compengine = util.compengines.forbundletype(b'UN')
855 self._compressed = None
855 self._compressed = None
856 super(unbundle20, self).__init__(fp)
856 super(unbundle20, self).__init__(fp)
857
857
858 @util.propertycache
858 @util.propertycache
859 def params(self):
859 def params(self):
860 """dictionary of stream level parameters"""
860 """dictionary of stream level parameters"""
861 indebug(self.ui, b'reading bundle2 stream parameters')
861 indebug(self.ui, b'reading bundle2 stream parameters')
862 params = {}
862 params = {}
863 paramssize = self._unpack(_fstreamparamsize)[0]
863 paramssize = self._unpack(_fstreamparamsize)[0]
864 if paramssize < 0:
864 if paramssize < 0:
865 raise error.BundleValueError(
865 raise error.BundleValueError(
866 b'negative bundle param size: %i' % paramssize
866 b'negative bundle param size: %i' % paramssize
867 )
867 )
868 if paramssize:
868 if paramssize:
869 params = self._readexact(paramssize)
869 params = self._readexact(paramssize)
870 params = self._processallparams(params)
870 params = self._processallparams(params)
871 return params
871 return params
872
872
873 def _processallparams(self, paramsblock):
873 def _processallparams(self, paramsblock):
874 """ """
874 """ """
875 params = util.sortdict()
875 params = util.sortdict()
876 for p in paramsblock.split(b' '):
876 for p in paramsblock.split(b' '):
877 p = p.split(b'=', 1)
877 p = p.split(b'=', 1)
878 p = [urlreq.unquote(i) for i in p]
878 p = [urlreq.unquote(i) for i in p]
879 if len(p) < 2:
879 if len(p) < 2:
880 p.append(None)
880 p.append(None)
881 self._processparam(*p)
881 self._processparam(*p)
882 params[p[0]] = p[1]
882 params[p[0]] = p[1]
883 return params
883 return params
884
884
885 def _processparam(self, name, value):
885 def _processparam(self, name, value):
886 """process a parameter, applying its effect if needed
886 """process a parameter, applying its effect if needed
887
887
888 Parameter starting with a lower case letter are advisory and will be
888 Parameter starting with a lower case letter are advisory and will be
889 ignored when unknown. Those starting with an upper case letter are
889 ignored when unknown. Those starting with an upper case letter are
890 mandatory and will this function will raise a KeyError when unknown.
890 mandatory and will this function will raise a KeyError when unknown.
891
891
892 Note: no option are currently supported. Any input will be either
892 Note: no option are currently supported. Any input will be either
893 ignored or failing.
893 ignored or failing.
894 """
894 """
895 if not name:
895 if not name:
896 raise ValueError('empty parameter name')
896 raise ValueError('empty parameter name')
897 if name[0:1] not in pycompat.bytestr(
897 if name[0:1] not in pycompat.bytestr(
898 string.ascii_letters # pytype: disable=wrong-arg-types
898 string.ascii_letters # pytype: disable=wrong-arg-types
899 ):
899 ):
900 raise ValueError('non letter first character: %s' % name)
900 raise ValueError('non letter first character: %s' % name)
901 try:
901 try:
902 handler = b2streamparamsmap[name.lower()]
902 handler = b2streamparamsmap[name.lower()]
903 except KeyError:
903 except KeyError:
904 if name[0:1].islower():
904 if name[0:1].islower():
905 indebug(self.ui, b"ignoring unknown parameter %s" % name)
905 indebug(self.ui, b"ignoring unknown parameter %s" % name)
906 else:
906 else:
907 raise error.BundleUnknownFeatureError(params=(name,))
907 raise error.BundleUnknownFeatureError(params=(name,))
908 else:
908 else:
909 handler(self, name, value)
909 handler(self, name, value)
910
910
911 def _forwardchunks(self):
911 def _forwardchunks(self):
912 """utility to transfer a bundle2 as binary
912 """utility to transfer a bundle2 as binary
913
913
914 This is made necessary by the fact the 'getbundle' command over 'ssh'
914 This is made necessary by the fact the 'getbundle' command over 'ssh'
915 have no way to know when the reply ends, relying on the bundle to be
915 have no way to know when the reply ends, relying on the bundle to be
916 interpreted to know its end. This is terrible and we are sorry, but we
916 interpreted to know its end. This is terrible and we are sorry, but we
917 needed to move forward to get general delta enabled.
917 needed to move forward to get general delta enabled.
918 """
918 """
919 yield self._magicstring
919 yield self._magicstring
920 assert 'params' not in vars(self)
920 assert 'params' not in vars(self)
921 paramssize = self._unpack(_fstreamparamsize)[0]
921 paramssize = self._unpack(_fstreamparamsize)[0]
922 if paramssize < 0:
922 if paramssize < 0:
923 raise error.BundleValueError(
923 raise error.BundleValueError(
924 b'negative bundle param size: %i' % paramssize
924 b'negative bundle param size: %i' % paramssize
925 )
925 )
926 if paramssize:
926 if paramssize:
927 params = self._readexact(paramssize)
927 params = self._readexact(paramssize)
928 self._processallparams(params)
928 self._processallparams(params)
929 # The payload itself is decompressed below, so drop
929 # The payload itself is decompressed below, so drop
930 # the compression parameter passed down to compensate.
930 # the compression parameter passed down to compensate.
931 outparams = []
931 outparams = []
932 for p in params.split(b' '):
932 for p in params.split(b' '):
933 k, v = p.split(b'=', 1)
933 k, v = p.split(b'=', 1)
934 if k.lower() != b'compression':
934 if k.lower() != b'compression':
935 outparams.append(p)
935 outparams.append(p)
936 outparams = b' '.join(outparams)
936 outparams = b' '.join(outparams)
937 yield _pack(_fstreamparamsize, len(outparams))
937 yield _pack(_fstreamparamsize, len(outparams))
938 yield outparams
938 yield outparams
939 else:
939 else:
940 yield _pack(_fstreamparamsize, paramssize)
940 yield _pack(_fstreamparamsize, paramssize)
941 # From there, payload might need to be decompressed
941 # From there, payload might need to be decompressed
942 self._fp = self._compengine.decompressorreader(self._fp)
942 self._fp = self._compengine.decompressorreader(self._fp)
943 emptycount = 0
943 emptycount = 0
944 while emptycount < 2:
944 while emptycount < 2:
945 # so we can brainlessly loop
945 # so we can brainlessly loop
946 assert _fpartheadersize == _fpayloadsize
946 assert _fpartheadersize == _fpayloadsize
947 size = self._unpack(_fpartheadersize)[0]
947 size = self._unpack(_fpartheadersize)[0]
948 yield _pack(_fpartheadersize, size)
948 yield _pack(_fpartheadersize, size)
949 if size:
949 if size:
950 emptycount = 0
950 emptycount = 0
951 else:
951 else:
952 emptycount += 1
952 emptycount += 1
953 continue
953 continue
954 if size == flaginterrupt:
954 if size == flaginterrupt:
955 continue
955 continue
956 elif size < 0:
956 elif size < 0:
957 raise error.BundleValueError(b'negative chunk size: %i' % size)
957 raise error.BundleValueError(b'negative chunk size: %i' % size)
958 yield self._readexact(size)
958 yield self._readexact(size)
959
959
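# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# The forwarding loop above copies int32-size-prefixed frames verbatim and
# stops after two consecutive zero-size frames (the last part's payload
# terminator followed by the empty part header that ends the stream).  A
# stripped-down version of that framing logic, commented out to keep the
# listing valid; it assumes read() returns exactly the requested bytes:
#
#     import struct
#
#     def forward_frames(read, write):
#         empty = 0
#         while empty < 2:
#             header = read(4)
#             size = struct.unpack('>i', header)[0]
#             write(header)
#             if size == 0:
#                 empty += 1
#                 continue
#             empty = 0
#             if size == -1:  # interrupt marker, no payload of its own
#                 continue
#             if size < 0:
#                 raise ValueError('negative frame size: %i' % size)
#             write(read(size))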
960 def iterparts(self, seekable=False):
960 def iterparts(self, seekable=False):
961 """yield all parts contained in the stream"""
961 """yield all parts contained in the stream"""
962 cls = seekableunbundlepart if seekable else unbundlepart
962 cls = seekableunbundlepart if seekable else unbundlepart
963 # make sure params have been loaded
963 # make sure params have been loaded
964 self.params
964 self.params
965 # From there, payload need to be decompressed
965 # From there, payload need to be decompressed
966 self._fp = self._compengine.decompressorreader(self._fp)
966 self._fp = self._compengine.decompressorreader(self._fp)
967 indebug(self.ui, b'start extraction of bundle2 parts')
967 indebug(self.ui, b'start extraction of bundle2 parts')
968 headerblock = self._readpartheader()
968 headerblock = self._readpartheader()
969 while headerblock is not None:
969 while headerblock is not None:
970 part = cls(self.ui, headerblock, self._fp)
970 part = cls(self.ui, headerblock, self._fp)
971 yield part
971 yield part
972 # Ensure part is fully consumed so we can start reading the next
972 # Ensure part is fully consumed so we can start reading the next
973 # part.
973 # part.
974 part.consume()
974 part.consume()
975
975
976 headerblock = self._readpartheader()
976 headerblock = self._readpartheader()
977 indebug(self.ui, b'end of bundle2 stream')
977 indebug(self.ui, b'end of bundle2 stream')
978
978
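# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# Hypothetical consumption pattern for iterparts() above.  Each part must be
# fully read before the next one can start, which the generator enforces by
# calling part.consume() after the caller moves on:
#
#     unbundler = unbundle20(ui, fp)
#     for part in unbundler.iterparts():
#         ui.write(b'%s: %d mandatory params\n'
#                  % (part.type, len(part.mandatoryparams)))
#         data = part.read()  # optional; consume() runs either way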
979 def _readpartheader(self):
979 def _readpartheader(self):
980 """reads a part header size and return the bytes blob
980 """reads a part header size and return the bytes blob
981
981
982 returns None if empty"""
982 returns None if empty"""
983 headersize = self._unpack(_fpartheadersize)[0]
983 headersize = self._unpack(_fpartheadersize)[0]
984 if headersize < 0:
984 if headersize < 0:
985 raise error.BundleValueError(
985 raise error.BundleValueError(
986 b'negative part header size: %i' % headersize
986 b'negative part header size: %i' % headersize
987 )
987 )
988 indebug(self.ui, b'part header size: %i' % headersize)
988 indebug(self.ui, b'part header size: %i' % headersize)
989 if headersize:
989 if headersize:
990 return self._readexact(headersize)
990 return self._readexact(headersize)
991 return None
991 return None
992
992
993 def compressed(self):
993 def compressed(self):
994 self.params # load params
994 self.params # load params
995 return self._compressed
995 return self._compressed
996
996
997 def close(self):
997 def close(self):
998 """close underlying file"""
998 """close underlying file"""
999 if hasattr(self._fp, 'close'):
999 if hasattr(self._fp, 'close'):
1000 return self._fp.close()
1000 return self._fp.close()
1001
1001
1002
1002
1003 formatmap = {b'20': unbundle20}
1003 formatmap = {b'20': unbundle20}
1004
1004
1005 b2streamparamsmap = {}
1005 b2streamparamsmap = {}
1006
1006
1007
1007
1008 def b2streamparamhandler(name):
1008 def b2streamparamhandler(name):
1009 """register a handler for a stream level parameter"""
1009 """register a handler for a stream level parameter"""
1010
1010
1011 def decorator(func):
1011 def decorator(func):
1012 assert name not in formatmap
1012 assert name not in formatmap
1013 b2streamparamsmap[name] = func
1013 b2streamparamsmap[name] = func
1014 return func
1014 return func
1015
1015
1016 return decorator
1016 return decorator
1017
1017
1018
1018
1019 @b2streamparamhandler(b'compression')
1019 @b2streamparamhandler(b'compression')
1020 def processcompression(unbundler, param, value):
1020 def processcompression(unbundler, param, value):
1021 """read compression parameter and install payload decompression"""
1021 """read compression parameter and install payload decompression"""
1022 if value not in util.compengines.supportedbundletypes:
1022 if value not in util.compengines.supportedbundletypes:
1023 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1023 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1024 unbundler._compengine = util.compengines.forbundletype(value)
1024 unbundler._compengine = util.compengines.forbundletype(value)
1025 if value is not None:
1025 if value is not None:
1026 unbundler._compressed = True
1026 unbundler._compressed = True
1027
1027
1028
1028
1029 class bundlepart:
1029 class bundlepart:
1030 """A bundle2 part contains application level payload
1030 """A bundle2 part contains application level payload
1031
1031
1032 The part `type` is used to route the part to the application level
1032 The part `type` is used to route the part to the application level
1033 handler.
1033 handler.
1034
1034
1035 The part payload is contained in ``part.data``. It could be raw bytes or a
1035 The part payload is contained in ``part.data``. It could be raw bytes or a
1036 generator of byte chunks.
1036 generator of byte chunks.
1037
1037
1038 You can add parameters to the part using the ``addparam`` method.
1038 You can add parameters to the part using the ``addparam`` method.
1039 Parameters can be either mandatory (default) or advisory. Remote side
1039 Parameters can be either mandatory (default) or advisory. Remote side
1040 should be able to safely ignore the advisory ones.
1040 should be able to safely ignore the advisory ones.
1041
1041
1042 Neither data nor parameters can be modified after the generation has begun.
1042 Neither data nor parameters can be modified after the generation has begun.
1043 """
1043 """
1044
1044
1045 def __init__(
1045 def __init__(
1046 self,
1046 self,
1047 parttype,
1047 parttype,
1048 mandatoryparams=(),
1048 mandatoryparams=(),
1049 advisoryparams=(),
1049 advisoryparams=(),
1050 data=b'',
1050 data=b'',
1051 mandatory=True,
1051 mandatory=True,
1052 ):
1052 ):
1053 validateparttype(parttype)
1053 validateparttype(parttype)
1054 self.id = None
1054 self.id = None
1055 self.type = parttype
1055 self.type = parttype
1056 self._data = data
1056 self._data = data
1057 self._mandatoryparams = list(mandatoryparams)
1057 self._mandatoryparams = list(mandatoryparams)
1058 self._advisoryparams = list(advisoryparams)
1058 self._advisoryparams = list(advisoryparams)
1059 # checking for duplicated entries
1059 # checking for duplicated entries
1060 self._seenparams = set()
1060 self._seenparams = set()
1061 for pname, __ in self._mandatoryparams + self._advisoryparams:
1061 for pname, __ in self._mandatoryparams + self._advisoryparams:
1062 if pname in self._seenparams:
1062 if pname in self._seenparams:
1063 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1063 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1064 self._seenparams.add(pname)
1064 self._seenparams.add(pname)
1065 # status of the part's generation:
1065 # status of the part's generation:
1066 # - None: not started,
1066 # - None: not started,
1067 # - False: currently being generated,
1067 # - False: currently being generated,
1068 # - True: generation done.
1068 # - True: generation done.
1069 self._generated = None
1069 self._generated = None
1070 self.mandatory = mandatory
1070 self.mandatory = mandatory
1071
1071
1072 def __repr__(self):
1072 def __repr__(self):
1073 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1073 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1074 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1074 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1075 cls,
1075 cls,
1076 id(self),
1076 id(self),
1077 self.id,
1077 self.id,
1078 self.type,
1078 self.type,
1079 self.mandatory,
1079 self.mandatory,
1080 )
1080 )
1081
1081
1082 def copy(self):
1082 def copy(self):
1083 """return a copy of the part
1083 """return a copy of the part
1084
1084
1085 The new part has the very same content but no partid assigned yet.
1085 The new part has the very same content but no partid assigned yet.
1086 Parts with generated data cannot be copied."""
1086 Parts with generated data cannot be copied."""
1087 assert not hasattr(self.data, 'next')
1087 assert not hasattr(self.data, 'next')
1088 return self.__class__(
1088 return self.__class__(
1089 self.type,
1089 self.type,
1090 self._mandatoryparams,
1090 self._mandatoryparams,
1091 self._advisoryparams,
1091 self._advisoryparams,
1092 self._data,
1092 self._data,
1093 self.mandatory,
1093 self.mandatory,
1094 )
1094 )
1095
1095
1096 # methods used to define the part content
1096 # methods used to define the part content
1097 @property
1097 @property
1098 def data(self):
1098 def data(self):
1099 return self._data
1099 return self._data
1100
1100
1101 @data.setter
1101 @data.setter
1102 def data(self, data):
1102 def data(self, data):
1103 if self._generated is not None:
1103 if self._generated is not None:
1104 raise error.ReadOnlyPartError(b'part is being generated')
1104 raise error.ReadOnlyPartError(b'part is being generated')
1105 self._data = data
1105 self._data = data
1106
1106
1107 @property
1107 @property
1108 def mandatoryparams(self):
1108 def mandatoryparams(self):
1109 # make it an immutable tuple to force people through ``addparam``
1109 # make it an immutable tuple to force people through ``addparam``
1110 return tuple(self._mandatoryparams)
1110 return tuple(self._mandatoryparams)
1111
1111
1112 @property
1112 @property
1113 def advisoryparams(self):
1113 def advisoryparams(self):
1114 # make it an immutable tuple to force people through ``addparam``
1114 # make it an immutable tuple to force people through ``addparam``
1115 return tuple(self._advisoryparams)
1115 return tuple(self._advisoryparams)
1116
1116
1117 def addparam(self, name, value=b'', mandatory=True):
1117 def addparam(self, name, value=b'', mandatory=True):
1118 """add a parameter to the part
1118 """add a parameter to the part
1119
1119
1120 If 'mandatory' is set to True, the remote handler must claim support
1120 If 'mandatory' is set to True, the remote handler must claim support
1121 for this parameter or the unbundling will be aborted.
1121 for this parameter or the unbundling will be aborted.
1122
1122
1123 The 'name' and 'value' cannot exceed 255 bytes each.
1123 The 'name' and 'value' cannot exceed 255 bytes each.
1124 """
1124 """
1125 if self._generated is not None:
1125 if self._generated is not None:
1126 raise error.ReadOnlyPartError(b'part is being generated')
1126 raise error.ReadOnlyPartError(b'part is being generated')
1127 if name in self._seenparams:
1127 if name in self._seenparams:
1128 raise ValueError(b'duplicated params: %s' % name)
1128 raise ValueError(b'duplicated params: %s' % name)
1129 self._seenparams.add(name)
1129 self._seenparams.add(name)
1130 params = self._advisoryparams
1130 params = self._advisoryparams
1131 if mandatory:
1131 if mandatory:
1132 params = self._mandatoryparams
1132 params = self._mandatoryparams
1133 params.append((name, value))
1133 params.append((name, value))
1134
1134
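# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# Hypothetical use of the parameter API above; the part type and parameter
# names are made up for illustration:
#
#     part = bundlepart(b'output', data=b'hello')
#     part.addparam(b'encoding', b'utf-8')                    # mandatory
#     part.addparam(b'verbosity', b'high', mandatory=False)   # advisory
#     # addparam() raises on duplicate names, and ReadOnlyPartError once
#     # getchunks() has started generating the part.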
1135 # methods used to generate the bundle2 stream
1135 # methods used to generate the bundle2 stream
1136 def getchunks(self, ui):
1136 def getchunks(self, ui):
1137 if self._generated is not None:
1137 if self._generated is not None:
1138 raise error.ProgrammingError(b'part can only be consumed once')
1138 raise error.ProgrammingError(b'part can only be consumed once')
1139 self._generated = False
1139 self._generated = False
1140
1140
1141 if ui.debugflag:
1141 if ui.debugflag:
1142 msg = [b'bundle2-output-part: "%s"' % self.type]
1142 msg = [b'bundle2-output-part: "%s"' % self.type]
1143 if not self.mandatory:
1143 if not self.mandatory:
1144 msg.append(b' (advisory)')
1144 msg.append(b' (advisory)')
1145 nbmp = len(self.mandatoryparams)
1145 nbmp = len(self.mandatoryparams)
1146 nbap = len(self.advisoryparams)
1146 nbap = len(self.advisoryparams)
1147 if nbmp or nbap:
1147 if nbmp or nbap:
1148 msg.append(b' (params:')
1148 msg.append(b' (params:')
1149 if nbmp:
1149 if nbmp:
1150 msg.append(b' %i mandatory' % nbmp)
1150 msg.append(b' %i mandatory' % nbmp)
1151 if nbap:
1151 if nbap:
1152 msg.append(b' %i advisory' % nbap)
1152 msg.append(b' %i advisory' % nbap)
1153 msg.append(b')')
1153 msg.append(b')')
1154 if not self.data:
1154 if not self.data:
1155 msg.append(b' empty payload')
1155 msg.append(b' empty payload')
1156 elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1156 elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1157 msg.append(b' streamed payload')
1157 msg.append(b' streamed payload')
1158 else:
1158 else:
1159 msg.append(b' %i bytes payload' % len(self.data))
1159 msg.append(b' %i bytes payload' % len(self.data))
1160 msg.append(b'\n')
1160 msg.append(b'\n')
1161 ui.debug(b''.join(msg))
1161 ui.debug(b''.join(msg))
1162
1162
1163 #### header
1163 #### header
1164 if self.mandatory:
1164 if self.mandatory:
1165 parttype = self.type.upper()
1165 parttype = self.type.upper()
1166 else:
1166 else:
1167 parttype = self.type.lower()
1167 parttype = self.type.lower()
1168 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1168 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1169 ## parttype
1169 ## parttype
1170 header = [
1170 header = [
1171 _pack(_fparttypesize, len(parttype)),
1171 _pack(_fparttypesize, len(parttype)),
1172 parttype,
1172 parttype,
1173 _pack(_fpartid, self.id),
1173 _pack(_fpartid, self.id),
1174 ]
1174 ]
1175 ## parameters
1175 ## parameters
1176 # count
1176 # count
1177 manpar = self.mandatoryparams
1177 manpar = self.mandatoryparams
1178 advpar = self.advisoryparams
1178 advpar = self.advisoryparams
1179 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1179 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1180 # size
1180 # size
1181 parsizes = []
1181 parsizes = []
1182 for key, value in manpar:
1182 for key, value in manpar:
1183 parsizes.append(len(key))
1183 parsizes.append(len(key))
1184 parsizes.append(len(value))
1184 parsizes.append(len(value))
1185 for key, value in advpar:
1185 for key, value in advpar:
1186 parsizes.append(len(key))
1186 parsizes.append(len(key))
1187 parsizes.append(len(value))
1187 parsizes.append(len(value))
1188 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1188 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1189 header.append(paramsizes)
1189 header.append(paramsizes)
1190 # key, value
1190 # key, value
1191 for key, value in manpar:
1191 for key, value in manpar:
1192 header.append(key)
1192 header.append(key)
1193 header.append(value)
1193 header.append(value)
1194 for key, value in advpar:
1194 for key, value in advpar:
1195 header.append(key)
1195 header.append(key)
1196 header.append(value)
1196 header.append(value)
1197 ## finalize header
1197 ## finalize header
1198 try:
1198 try:
1199 headerchunk = b''.join(header)
1199 headerchunk = b''.join(header)
1200 except TypeError:
1200 except TypeError:
1201 raise TypeError(
1201 raise TypeError(
1202 'Found a non-bytes trying to '
1202 'Found a non-bytes trying to '
1203 'build bundle part header: %r' % header
1203 'build bundle part header: %r' % header
1204 )
1204 )
1205 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1205 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1206 yield _pack(_fpartheadersize, len(headerchunk))
1206 yield _pack(_fpartheadersize, len(headerchunk))
1207 yield headerchunk
1207 yield headerchunk
1208 ## payload
1208 ## payload
1209 try:
1209 try:
1210 for chunk in self._payloadchunks():
1210 for chunk in self._payloadchunks():
1211 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1211 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1212 yield _pack(_fpayloadsize, len(chunk))
1212 yield _pack(_fpayloadsize, len(chunk))
1213 yield chunk
1213 yield chunk
1214 except GeneratorExit:
1214 except GeneratorExit:
1215 # GeneratorExit means that nobody is listening for our
1215 # GeneratorExit means that nobody is listening for our
1216 # results anyway, so just bail quickly rather than trying
1216 # results anyway, so just bail quickly rather than trying
1217 # to produce an error part.
1217 # to produce an error part.
1218 ui.debug(b'bundle2-generatorexit\n')
1218 ui.debug(b'bundle2-generatorexit\n')
1219 raise
1219 raise
1220 except BaseException as exc:
1220 except BaseException as exc:
1221 bexc = stringutil.forcebytestr(exc)
1221 bexc = stringutil.forcebytestr(exc)
1222 # backup exception data for later
1222 # backup exception data for later
1223 ui.debug(
1223 ui.debug(
1224 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1224 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1225 )
1225 )
1226 tb = sys.exc_info()[2]
1226 tb = sys.exc_info()[2]
1227 msg = b'unexpected error: %s' % bexc
1227 msg = b'unexpected error: %s' % bexc
1228 interpart = bundlepart(
1228 interpart = bundlepart(
1229 b'error:abort', [(b'message', msg)], mandatory=False
1229 b'error:abort', [(b'message', msg)], mandatory=False
1230 )
1230 )
1231 interpart.id = 0
1231 interpart.id = 0
1232 yield _pack(_fpayloadsize, -1)
1232 yield _pack(_fpayloadsize, -1)
1233 for chunk in interpart.getchunks(ui=ui):
1233 for chunk in interpart.getchunks(ui=ui):
1234 yield chunk
1234 yield chunk
1235 outdebug(ui, b'closing payload chunk')
1235 outdebug(ui, b'closing payload chunk')
1236 # abort current part payload
1236 # abort current part payload
1237 yield _pack(_fpayloadsize, 0)
1237 yield _pack(_fpayloadsize, 0)
1238 pycompat.raisewithtb(exc, tb)
1238 pycompat.raisewithtb(exc, tb)
1239 # end of payload
1239 # end of payload
1240 outdebug(ui, b'closing payload chunk')
1240 outdebug(ui, b'closing payload chunk')
1241 yield _pack(_fpayloadsize, 0)
1241 yield _pack(_fpayloadsize, 0)
1242 self._generated = True
1242 self._generated = True
1243
1243
1244 def _payloadchunks(self):
1244 def _payloadchunks(self):
1245 """yield chunks of a the part payload
1245 """yield chunks of a the part payload
1246
1246
1247 Exists to handle the different methods to provide data to a part."""
1247 Exists to handle the different methods to provide data to a part."""
1248 # we only support fixed size data now.
1248 # we only support fixed size data now.
1249 # This will be improved in the future.
1249 # This will be improved in the future.
1250 if hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1250 if hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1251 buff = util.chunkbuffer(self.data)
1251 buff = util.chunkbuffer(self.data)
1252 chunk = buff.read(preferedchunksize)
1252 chunk = buff.read(preferedchunksize)
1253 while chunk:
1253 while chunk:
1254 yield chunk
1254 yield chunk
1255 chunk = buff.read(preferedchunksize)
1255 chunk = buff.read(preferedchunksize)
1256 elif len(self.data):
1256 elif len(self.data):
1257 yield self.data
1257 yield self.data
1258
1258
1259
1259
1260 flaginterrupt = -1
1260 flaginterrupt = -1
1261
1261
1262
1262
1263 class interrupthandler(unpackermixin):
1263 class interrupthandler(unpackermixin):
1264 """read one part and process it with restricted capability
1264 """read one part and process it with restricted capability
1265
1265
1266 This allows transmitting exceptions raised on the producer side during part
1266 This allows transmitting exceptions raised on the producer side during part
1267 iteration while the consumer is reading a part.
1267 iteration while the consumer is reading a part.
1268
1268
1269 Parts processed in this manner only have access to a ui object."""
1269 Parts processed in this manner only have access to a ui object."""
1270
1270
1271 def __init__(self, ui, fp):
1271 def __init__(self, ui, fp):
1272 super(interrupthandler, self).__init__(fp)
1272 super(interrupthandler, self).__init__(fp)
1273 self.ui = ui
1273 self.ui = ui
1274
1274
1275 def _readpartheader(self):
1275 def _readpartheader(self):
1276 """reads a part header size and return the bytes blob
1276 """reads a part header size and return the bytes blob
1277
1277
1278 returns None if empty"""
1278 returns None if empty"""
1279 headersize = self._unpack(_fpartheadersize)[0]
1279 headersize = self._unpack(_fpartheadersize)[0]
1280 if headersize < 0:
1280 if headersize < 0:
1281 raise error.BundleValueError(
1281 raise error.BundleValueError(
1282 b'negative part header size: %i' % headersize
1282 b'negative part header size: %i' % headersize
1283 )
1283 )
1284 indebug(self.ui, b'part header size: %i\n' % headersize)
1284 indebug(self.ui, b'part header size: %i\n' % headersize)
1285 if headersize:
1285 if headersize:
1286 return self._readexact(headersize)
1286 return self._readexact(headersize)
1287 return None
1287 return None
1288
1288
1289 def __call__(self):
1289 def __call__(self):
1290 self.ui.debug(
1290 self.ui.debug(
1291 b'bundle2-input-stream-interrupt: opening out of band context\n'
1291 b'bundle2-input-stream-interrupt: opening out of band context\n'
1292 )
1292 )
1293 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1293 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1294 headerblock = self._readpartheader()
1294 headerblock = self._readpartheader()
1295 if headerblock is None:
1295 if headerblock is None:
1296 indebug(self.ui, b'no part found during interruption.')
1296 indebug(self.ui, b'no part found during interruption.')
1297 return
1297 return
1298 part = unbundlepart(self.ui, headerblock, self._fp)
1298 part = unbundlepart(self.ui, headerblock, self._fp)
1299 op = interruptoperation(self.ui)
1299 op = interruptoperation(self.ui)
1300 hardabort = False
1300 hardabort = False
1301 try:
1301 try:
1302 _processpart(op, part)
1302 _processpart(op, part)
1303 except (SystemExit, KeyboardInterrupt):
1303 except (SystemExit, KeyboardInterrupt):
1304 hardabort = True
1304 hardabort = True
1305 raise
1305 raise
1306 finally:
1306 finally:
1307 if not hardabort:
1307 if not hardabort:
1308 part.consume()
1308 part.consume()
1309 self.ui.debug(
1309 self.ui.debug(
1310 b'bundle2-input-stream-interrupt: closing out of band context\n'
1310 b'bundle2-input-stream-interrupt: closing out of band context\n'
1311 )
1311 )
1312
1312
1313
1313
1314 class interruptoperation:
1314 class interruptoperation:
1315 """A limited operation to be use by part handler during interruption
1315 """A limited operation to be use by part handler during interruption
1316
1316
1317 It only has access to a ui object.
1317 It only has access to a ui object.
1318 """
1318 """
1319
1319
1320 def __init__(self, ui):
1320 def __init__(self, ui):
1321 self.ui = ui
1321 self.ui = ui
1322 self.reply = None
1322 self.reply = None
1323 self.captureoutput = False
1323 self.captureoutput = False
1324
1324
1325 @property
1325 @property
1326 def repo(self):
1326 def repo(self):
1327 raise error.ProgrammingError(b'no repo access from stream interruption')
1327 raise error.ProgrammingError(b'no repo access from stream interruption')
1328
1328
1329 def gettransaction(self):
1329 def gettransaction(self):
1330 raise TransactionUnavailable(b'no repo access from stream interruption')
1330 raise TransactionUnavailable(b'no repo access from stream interruption')
1331
1331
1332
1332
1333 def decodepayloadchunks(ui, fh):
1333 def decodepayloadchunks(ui, fh):
1334 """Reads bundle2 part payload data into chunks.
1334 """Reads bundle2 part payload data into chunks.
1335
1335
1336 Part payload data consists of framed chunks. This function takes
1336 Part payload data consists of framed chunks. This function takes
1337 a file handle and emits those chunks.
1337 a file handle and emits those chunks.
1338 """
1338 """
1339 dolog = ui.configbool(b'devel', b'bundle2.debug')
1339 dolog = ui.configbool(b'devel', b'bundle2.debug')
1340 debug = ui.debug
1340 debug = ui.debug
1341
1341
1342 headerstruct = struct.Struct(_fpayloadsize)
1342 headerstruct = struct.Struct(_fpayloadsize)
1343 headersize = headerstruct.size
1343 headersize = headerstruct.size
1344 unpack = headerstruct.unpack
1344 unpack = headerstruct.unpack
1345
1345
1346 readexactly = changegroup.readexactly
1346 readexactly = changegroup.readexactly
1347 read = fh.read
1347 read = fh.read
1348
1348
1349 chunksize = unpack(readexactly(fh, headersize))[0]
1349 chunksize = unpack(readexactly(fh, headersize))[0]
1350 indebug(ui, b'payload chunk size: %i' % chunksize)
1350 indebug(ui, b'payload chunk size: %i' % chunksize)
1351
1351
1352 # changegroup.readexactly() is inlined below for performance.
1352 # changegroup.readexactly() is inlined below for performance.
1353 while chunksize:
1353 while chunksize:
1354 if chunksize >= 0:
1354 if chunksize >= 0:
1355 s = read(chunksize)
1355 s = read(chunksize)
1356 if len(s) < chunksize:
1356 if len(s) < chunksize:
1357 raise error.Abort(
1357 raise error.Abort(
1358 _(
1358 _(
1359 b'stream ended unexpectedly '
1359 b'stream ended unexpectedly '
1360 b'(got %d bytes, expected %d)'
1360 b'(got %d bytes, expected %d)'
1361 )
1361 )
1362 % (len(s), chunksize)
1362 % (len(s), chunksize)
1363 )
1363 )
1364
1364
1365 yield s
1365 yield s
1366 elif chunksize == flaginterrupt:
1366 elif chunksize == flaginterrupt:
1367 # Interrupt "signal" detected. The regular stream is interrupted
1367 # Interrupt "signal" detected. The regular stream is interrupted
1368 # and a bundle2 part follows. Consume it.
1368 # and a bundle2 part follows. Consume it.
1369 interrupthandler(ui, fh)()
1369 interrupthandler(ui, fh)()
1370 else:
1370 else:
1371 raise error.BundleValueError(
1371 raise error.BundleValueError(
1372 b'negative payload chunk size: %s' % chunksize
1372 b'negative payload chunk size: %s' % chunksize
1373 )
1373 )
1374
1374
1375 s = read(headersize)
1375 s = read(headersize)
1376 if len(s) < headersize:
1376 if len(s) < headersize:
1377 raise error.Abort(
1377 raise error.Abort(
1378 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1378 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1379 % (len(s), headersize)
1379 % (len(s), headersize)
1380 )
1380 )
1381
1381
1382 chunksize = unpack(s)[0]
1382 chunksize = unpack(s)[0]
1383
1383
1384 # indebug() inlined for performance.
1384 # indebug() inlined for performance.
1385 if dolog:
1385 if dolog:
1386 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1386 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1387
1387
1388
1388
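# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# For clarity, the inverse of the decoder above: a part payload is a sequence
# of '>i'-size-prefixed chunks terminated by a zero-size chunk (-1 escapes to
# an out-of-band interrupt part, which this sketch does not emit).  The
# function name is hypothetical.
import struct


def encodepayloadchunks_sketch(chunks):
    """Frame an iterable of byte chunks the way a bundle2 payload expects."""
    for chunk in chunks:
        yield struct.pack('>i', len(chunk))
        yield chunk
    yield struct.pack('>i', 0)  # end-of-payload marker


# b''.join(encodepayloadchunks_sketch([b'abc', b'de'])) ==
#     b'\x00\x00\x00\x03abc\x00\x00\x00\x02de\x00\x00\x00\x00'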
1389 class unbundlepart(unpackermixin):
1389 class unbundlepart(unpackermixin):
1390 """a bundle part read from a bundle"""
1390 """a bundle part read from a bundle"""
1391
1391
1392 def __init__(self, ui, header, fp):
1392 def __init__(self, ui, header, fp):
1393 super(unbundlepart, self).__init__(fp)
1393 super(unbundlepart, self).__init__(fp)
1394 self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell')
1394 self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell')
1395 self.ui = ui
1395 self.ui = ui
1396 # unbundle state attr
1396 # unbundle state attr
1397 self._headerdata = header
1397 self._headerdata = header
1398 self._headeroffset = 0
1398 self._headeroffset = 0
1399 self._initialized = False
1399 self._initialized = False
1400 self.consumed = False
1400 self.consumed = False
1401 # part data
1401 # part data
1402 self.id = None
1402 self.id = None
1403 self.type = None
1403 self.type = None
1404 self.mandatoryparams = None
1404 self.mandatoryparams = None
1405 self.advisoryparams = None
1405 self.advisoryparams = None
1406 self.params = None
1406 self.params = None
1407 self.mandatorykeys = ()
1407 self.mandatorykeys = ()
1408 self._readheader()
1408 self._readheader()
1409 self._mandatory = None
1409 self._mandatory = None
1410 self._pos = 0
1410 self._pos = 0
1411
1411
1412 def _fromheader(self, size):
1412 def _fromheader(self, size):
1413 """return the next <size> byte from the header"""
1413 """return the next <size> byte from the header"""
1414 offset = self._headeroffset
1414 offset = self._headeroffset
1415 data = self._headerdata[offset : (offset + size)]
1415 data = self._headerdata[offset : (offset + size)]
1416 self._headeroffset = offset + size
1416 self._headeroffset = offset + size
1417 return data
1417 return data
1418
1418
1419 def _unpackheader(self, format):
1419 def _unpackheader(self, format):
1420 """read given format from header
1420 """read given format from header
1421
1421
1422 This automatically computes the size of the format to read.
1422 This automatically computes the size of the format to read.
1423 data = self._fromheader(struct.calcsize(format))
1423 data = self._fromheader(struct.calcsize(format))
1424 return _unpack(format, data)
1424 return _unpack(format, data)
1425
1425
1426 def _initparams(self, mandatoryparams, advisoryparams):
1426 def _initparams(self, mandatoryparams, advisoryparams):
1427 """internal function to setup all logic related parameters"""
1427 """internal function to setup all logic related parameters"""
1428 # make it read only to prevent people touching it by mistake.
1428 # make it read only to prevent people touching it by mistake.
1429 self.mandatoryparams = tuple(mandatoryparams)
1429 self.mandatoryparams = tuple(mandatoryparams)
1430 self.advisoryparams = tuple(advisoryparams)
1430 self.advisoryparams = tuple(advisoryparams)
1431 # user friendly UI
1431 # user friendly UI
1432 self.params = util.sortdict(self.mandatoryparams)
1432 self.params = util.sortdict(self.mandatoryparams)
1433 self.params.update(self.advisoryparams)
1433 self.params.update(self.advisoryparams)
1434 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1434 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1435
1435
1436 def _readheader(self):
1436 def _readheader(self):
1437 """read the header and setup the object"""
1437 """read the header and setup the object"""
1438 typesize = self._unpackheader(_fparttypesize)[0]
1438 typesize = self._unpackheader(_fparttypesize)[0]
1439 self.type = self._fromheader(typesize)
1439 self.type = self._fromheader(typesize)
1440 indebug(self.ui, b'part type: "%s"' % self.type)
1440 indebug(self.ui, b'part type: "%s"' % self.type)
1441 self.id = self._unpackheader(_fpartid)[0]
1441 self.id = self._unpackheader(_fpartid)[0]
1442 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1442 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1443 # extract mandatory bit from type
1443 # extract mandatory bit from type
1444 self.mandatory = self.type != self.type.lower()
1444 self.mandatory = self.type != self.type.lower()
1445 self.type = self.type.lower()
1445 self.type = self.type.lower()
1446 ## reading parameters
1446 ## reading parameters
1447 # param count
1447 # param count
1448 mancount, advcount = self._unpackheader(_fpartparamcount)
1448 mancount, advcount = self._unpackheader(_fpartparamcount)
1449 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1449 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1450 # param size
1450 # param size
1451 fparamsizes = _makefpartparamsizes(mancount + advcount)
1451 fparamsizes = _makefpartparamsizes(mancount + advcount)
1452 paramsizes = self._unpackheader(fparamsizes)
1452 paramsizes = self._unpackheader(fparamsizes)
1453 # make it a list of pairs again
1453 # make it a list of pairs again
1454 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1454 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1455 # split mandatory from advisory
1455 # split mandatory from advisory
1456 mansizes = paramsizes[:mancount]
1456 mansizes = paramsizes[:mancount]
1457 advsizes = paramsizes[mancount:]
1457 advsizes = paramsizes[mancount:]
1458 # retrieve param value
1458 # retrieve param value
1459 manparams = []
1459 manparams = []
1460 for key, value in mansizes:
1460 for key, value in mansizes:
1461 manparams.append((self._fromheader(key), self._fromheader(value)))
1461 manparams.append((self._fromheader(key), self._fromheader(value)))
1462 advparams = []
1462 advparams = []
1463 for key, value in advsizes:
1463 for key, value in advsizes:
1464 advparams.append((self._fromheader(key), self._fromheader(value)))
1464 advparams.append((self._fromheader(key), self._fromheader(value)))
1465 self._initparams(manparams, advparams)
1465 self._initparams(manparams, advparams)
1466 ## part payload
1466 ## part payload
1467 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1467 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1468 # we read the data, tell it
1468 # we read the data, tell it
1469 self._initialized = True
1469 self._initialized = True
1470
1470
1471 def _payloadchunks(self):
1471 def _payloadchunks(self):
1472 """Generator of decoded chunks in the payload."""
1472 """Generator of decoded chunks in the payload."""
1473 return decodepayloadchunks(self.ui, self._fp)
1473 return decodepayloadchunks(self.ui, self._fp)
1474
1474
1475 def consume(self):
1475 def consume(self):
1476 """Read the part payload until completion.
1476 """Read the part payload until completion.
1477
1477
1478 By consuming the part data, the underlying stream read offset will
1478 By consuming the part data, the underlying stream read offset will
1479 be advanced to the next part (or end of stream).
1479 be advanced to the next part (or end of stream).
1480 """
1480 """
1481 if self.consumed:
1481 if self.consumed:
1482 return
1482 return
1483
1483
1484 chunk = self.read(32768)
1484 chunk = self.read(32768)
1485 while chunk:
1485 while chunk:
1486 self._pos += len(chunk)
1486 self._pos += len(chunk)
1487 chunk = self.read(32768)
1487 chunk = self.read(32768)
1488
1488
1489 def read(self, size=None):
1489 def read(self, size=None):
1490 """read payload data"""
1490 """read payload data"""
1491 if not self._initialized:
1491 if not self._initialized:
1492 self._readheader()
1492 self._readheader()
1493 if size is None:
1493 if size is None:
1494 data = self._payloadstream.read()
1494 data = self._payloadstream.read()
1495 else:
1495 else:
1496 data = self._payloadstream.read(size)
1496 data = self._payloadstream.read(size)
1497 self._pos += len(data)
1497 self._pos += len(data)
1498 if size is None or len(data) < size:
1498 if size is None or len(data) < size:
1499 if not self.consumed and self._pos:
1499 if not self.consumed and self._pos:
1500 self.ui.debug(
1500 self.ui.debug(
1501 b'bundle2-input-part: total payload size %i\n' % self._pos
1501 b'bundle2-input-part: total payload size %i\n' % self._pos
1502 )
1502 )
1503 self.consumed = True
1503 self.consumed = True
1504 return data
1504 return data
1505
1505
1506
1506
1507 class seekableunbundlepart(unbundlepart):
1507 class seekableunbundlepart(unbundlepart):
1508 """A bundle2 part in a bundle that is seekable.
1508 """A bundle2 part in a bundle that is seekable.
1509
1509
1510 Regular ``unbundlepart`` instances can only be read once. This class
1510 Regular ``unbundlepart`` instances can only be read once. This class
1511 extends ``unbundlepart`` to enable bi-directional seeking within the
1511 extends ``unbundlepart`` to enable bi-directional seeking within the
1512 part.
1512 part.
1513
1513
1514 Bundle2 part data consists of framed chunks. Offsets when seeking
1514 Bundle2 part data consists of framed chunks. Offsets when seeking
1515 refer to the decoded data, not the offsets in the underlying bundle2
1515 refer to the decoded data, not the offsets in the underlying bundle2
1516 stream.
1516 stream.
1517
1517
1518 To facilitate quickly seeking within the decoded data, instances of this
1518 To facilitate quickly seeking within the decoded data, instances of this
1519 class maintain a mapping between offsets in the underlying stream and
1519 class maintain a mapping between offsets in the underlying stream and
1520 the decoded payload. This mapping will consume memory in proportion
1520 the decoded payload. This mapping will consume memory in proportion
1521 to the number of chunks within the payload (which almost certainly
1521 to the number of chunks within the payload (which almost certainly
1522 increases in proportion with the size of the part).
1522 increases in proportion with the size of the part).
1523 """
1523 """
1524
1524
1525 def __init__(self, ui, header, fp):
1525 def __init__(self, ui, header, fp):
1526 # (payload, file) offsets for chunk starts.
1526 # (payload, file) offsets for chunk starts.
1527 self._chunkindex = []
1527 self._chunkindex = []
1528
1528
1529 super(seekableunbundlepart, self).__init__(ui, header, fp)
1529 super(seekableunbundlepart, self).__init__(ui, header, fp)
1530
1530
1531 def _payloadchunks(self, chunknum=0):
1531 def _payloadchunks(self, chunknum=0):
1532 '''seek to specified chunk and start yielding data'''
1532 '''seek to specified chunk and start yielding data'''
1533 if len(self._chunkindex) == 0:
1533 if len(self._chunkindex) == 0:
1534 assert chunknum == 0, b'Must start with chunk 0'
1534 assert chunknum == 0, b'Must start with chunk 0'
1535 self._chunkindex.append((0, self._tellfp()))
1535 self._chunkindex.append((0, self._tellfp()))
1536 else:
1536 else:
1537 assert chunknum < len(self._chunkindex), (
1537 assert chunknum < len(self._chunkindex), (
1538 b'Unknown chunk %d' % chunknum
1538 b'Unknown chunk %d' % chunknum
1539 )
1539 )
1540 self._seekfp(self._chunkindex[chunknum][1])
1540 self._seekfp(self._chunkindex[chunknum][1])
1541
1541
1542 pos = self._chunkindex[chunknum][0]
1542 pos = self._chunkindex[chunknum][0]
1543
1543
1544 for chunk in decodepayloadchunks(self.ui, self._fp):
1544 for chunk in decodepayloadchunks(self.ui, self._fp):
1545 chunknum += 1
1545 chunknum += 1
1546 pos += len(chunk)
1546 pos += len(chunk)
1547 if chunknum == len(self._chunkindex):
1547 if chunknum == len(self._chunkindex):
1548 self._chunkindex.append((pos, self._tellfp()))
1548 self._chunkindex.append((pos, self._tellfp()))
1549
1549
1550 yield chunk
1550 yield chunk
1551
1551
1552 def _findchunk(self, pos):
1552 def _findchunk(self, pos):
1553 '''for a given payload position, return a chunk number and offset'''
1553 '''for a given payload position, return a chunk number and offset'''
1554 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1554 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1555 if ppos == pos:
1555 if ppos == pos:
1556 return chunk, 0
1556 return chunk, 0
1557 elif ppos > pos:
1557 elif ppos > pos:
1558 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1558 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1559 raise ValueError(b'Unknown chunk')
1559 raise ValueError(b'Unknown chunk')
1560
1560
1561 def tell(self):
1561 def tell(self):
1562 return self._pos
1562 return self._pos
1563
1563
1564 def seek(self, offset, whence=os.SEEK_SET):
1564 def seek(self, offset, whence=os.SEEK_SET):
1565 if whence == os.SEEK_SET:
1565 if whence == os.SEEK_SET:
1566 newpos = offset
1566 newpos = offset
1567 elif whence == os.SEEK_CUR:
1567 elif whence == os.SEEK_CUR:
1568 newpos = self._pos + offset
1568 newpos = self._pos + offset
1569 elif whence == os.SEEK_END:
1569 elif whence == os.SEEK_END:
1570 if not self.consumed:
1570 if not self.consumed:
1571 # Can't use self.consume() here because it advances self._pos.
1571 # Can't use self.consume() here because it advances self._pos.
1572 chunk = self.read(32768)
1572 chunk = self.read(32768)
1573 while chunk:
1573 while chunk:
1574 chunk = self.read(32768)
1574 chunk = self.read(32768)
1575 newpos = self._chunkindex[-1][0] - offset
1575 newpos = self._chunkindex[-1][0] - offset
1576 else:
1576 else:
1577 raise ValueError(b'Unknown whence value: %r' % (whence,))
1577 raise ValueError(b'Unknown whence value: %r' % (whence,))
1578
1578
1579 if newpos > self._chunkindex[-1][0] and not self.consumed:
1579 if newpos > self._chunkindex[-1][0] and not self.consumed:
1580 # Can't use self.consume() here because it advances self._pos.
1580 # Can't use self.consume() here because it advances self._pos.
1581 chunk = self.read(32768)
1581 chunk = self.read(32768)
1582 while chunk:
1582 while chunk:
1583 chunk = self.read(32768)
1583 chunk = self.read(32768)
1584
1584
1585 if not 0 <= newpos <= self._chunkindex[-1][0]:
1585 if not 0 <= newpos <= self._chunkindex[-1][0]:
1586 raise ValueError(b'Offset out of range')
1586 raise ValueError(b'Offset out of range')
1587
1587
1588 if self._pos != newpos:
1588 if self._pos != newpos:
1589 chunk, internaloffset = self._findchunk(newpos)
1589 chunk, internaloffset = self._findchunk(newpos)
1590 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1590 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1591 adjust = self.read(internaloffset)
1591 adjust = self.read(internaloffset)
1592 if len(adjust) != internaloffset:
1592 if len(adjust) != internaloffset:
1593 raise error.Abort(_(b'Seek failed\n'))
1593 raise error.Abort(_(b'Seek failed\n'))
1594 self._pos = newpos
1594 self._pos = newpos
1595
1595
1596 def _seekfp(self, offset, whence=0):
1596 def _seekfp(self, offset, whence=0):
1597 """move the underlying file pointer
1597 """move the underlying file pointer
1598
1598
1599 This method is meant for internal usage by the bundle2 protocol only.
1599 This method is meant for internal usage by the bundle2 protocol only.
1600 It directly manipulates the low level stream, including bundle2 level
1600 It directly manipulates the low level stream, including bundle2 level
1601 instructions.
1601 instructions.
1602
1602
1603 Do not use it to implement higher-level logic or methods."""
1603 Do not use it to implement higher-level logic or methods."""
1604 if self._seekable:
1604 if self._seekable:
1605 return self._fp.seek(offset, whence)
1605 return self._fp.seek(offset, whence)
1606 else:
1606 else:
1607 raise NotImplementedError(_(b'File pointer is not seekable'))
1607 raise NotImplementedError(_(b'File pointer is not seekable'))
1608
1608
1609 def _tellfp(self):
1609 def _tellfp(self):
1610 """return the file offset, or None if file is not seekable
1610 """return the file offset, or None if file is not seekable
1611
1611
1612 This method is meant for internal usage by the bundle2 protocol only.
1612 This method is meant for internal usage by the bundle2 protocol only.
1613 It directly manipulates the low level stream, including bundle2 level
1613 It directly manipulates the low level stream, including bundle2 level
1614 instructions.
1614 instructions.
1615
1615
1616 Do not use it to implement higher-level logic or methods."""
1616 Do not use it to implement higher-level logic or methods."""
1617 if self._seekable:
1617 if self._seekable:
1618 try:
1618 try:
1619 return self._fp.tell()
1619 return self._fp.tell()
1620 except IOError as e:
1620 except IOError as e:
1621 if e.errno == errno.ESPIPE:
1621 if e.errno == errno.ESPIPE:
1622 self._seekable = False
1622 self._seekable = False
1623 else:
1623 else:
1624 raise
1624 raise
1625 return None
1625 return None
1626
1626
1627
1627
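# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# seekableunbundlepart.seek() above relies on _chunkindex, a list of
# (payload offset, file offset) pairs recorded as chunks are first read.
# A self-contained sketch of the lookup it performs (mirroring _findchunk);
# the sample index below is made up:
def findchunk_sketch(chunkindex, pos):
    """Map a payload position to (chunk number, offset inside that chunk)."""
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('unknown chunk')


# With three chunks of 10 decoded bytes starting at file offset 100:
#     findchunk_sketch([(0, 100), (10, 118), (20, 136)], 15) == (1, 5)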
1628 # These are only the static capabilities.
1628 # These are only the static capabilities.
1629 # Check the 'getrepocaps' function for the rest.
1629 # Check the 'getrepocaps' function for the rest.
1630 capabilities: "Capabilities" = {
1630 capabilities: "Capabilities" = {
1631 b'HG20': (),
1631 b'HG20': (),
1632 b'bookmarks': (),
1632 b'bookmarks': (),
1633 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1633 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1634 b'listkeys': (),
1634 b'listkeys': (),
1635 b'pushkey': (),
1635 b'pushkey': (),
1636 b'digests': tuple(sorted(util.DIGESTS.keys())),
1636 b'digests': tuple(sorted(util.DIGESTS.keys())),
1637 b'remote-changegroup': (b'http', b'https'),
1637 b'remote-changegroup': (b'http', b'https'),
1638 b'hgtagsfnodes': (),
1638 b'hgtagsfnodes': (),
1639 b'phases': (b'heads',),
1639 b'phases': (b'heads',),
1640 b'stream': (b'v2',),
1640 b'stream': (b'v2',),
1641 }
1641 }
1642
1642
1643
1643
1644 # TODO: drop the default value for 'role'
1644 # TODO: drop the default value for 'role'
1645 def getrepocaps(repo, allowpushback: bool = False, role=None) -> "Capabilities":
1645 def getrepocaps(repo, allowpushback: bool = False, role=None) -> "Capabilities":
1646 """return the bundle2 capabilities for a given repo
1646 """return the bundle2 capabilities for a given repo
1647
1647
1648 Exists to allow extensions (like evolution) to mutate the capabilities.
1648 Exists to allow extensions (like evolution) to mutate the capabilities.
1649
1649
1650 The returned value is used for servers advertising their capabilities as
1650 The returned value is used for servers advertising their capabilities as
1651 well as clients advertising their capabilities to servers as part of
1651 well as clients advertising their capabilities to servers as part of
1652 bundle2 requests. The ``role`` argument specifies which is which.
1652 bundle2 requests. The ``role`` argument specifies which is which.
1653 """
1653 """
1654 if role not in (b'client', b'server'):
1654 if role not in (b'client', b'server'):
1655 raise error.ProgrammingError(b'role argument must be client or server')
1655 raise error.ProgrammingError(b'role argument must be client or server')
1656
1656
1657 caps = capabilities.copy()
1657 caps = capabilities.copy()
1658 caps[b'changegroup'] = tuple(
1658 caps[b'changegroup'] = tuple(
1659 sorted(changegroup.supportedincomingversions(repo))
1659 sorted(changegroup.supportedincomingversions(repo))
1660 )
1660 )
1661 if obsolete.isenabled(repo, obsolete.exchangeopt):
1661 if obsolete.isenabled(repo, obsolete.exchangeopt):
1662 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1662 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1663 caps[b'obsmarkers'] = supportedformat
1663 caps[b'obsmarkers'] = supportedformat
1664 if allowpushback:
1664 if allowpushback:
1665 caps[b'pushback'] = ()
1665 caps[b'pushback'] = ()
1666 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1666 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1667 if cpmode == b'check-related':
1667 if cpmode == b'check-related':
1668 caps[b'checkheads'] = (b'related',)
1668 caps[b'checkheads'] = (b'related',)
1669 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1669 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1670 caps.pop(b'phases')
1670 caps.pop(b'phases')
1671
1671
1672 # Don't advertise stream clone support in server mode if not configured.
1672 # Don't advertise stream clone support in server mode if not configured.
1673 if role == b'server':
1673 if role == b'server':
1674 streamsupported = repo.ui.configbool(
1674 streamsupported = repo.ui.configbool(
1675 b'server', b'uncompressed', untrusted=True
1675 b'server', b'uncompressed', untrusted=True
1676 )
1676 )
1677 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1677 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1678
1678
1679 if not streamsupported or not featuresupported:
1679 if not streamsupported or not featuresupported:
1680 caps.pop(b'stream')
1680 caps.pop(b'stream')
1681 # Else always advertise support on client, because payload support
1681 # Else always advertise support on client, because payload support
1682 # should always be advertised.
1682 # should always be advertised.
1683
1683
1684 if repo.ui.configbool(b'experimental', b'stream-v3'):
1684 if repo.ui.configbool(b'experimental', b'stream-v3'):
1685 if b'stream' in caps:
1685 if b'stream' in caps:
1686 caps[b'stream'] += (b'v3-exp',)
1686 caps[b'stream'] += (b'v3-exp',)
1687
1687
1688 # 'rev-branch-cache' is no longer advertised, but still supported
1688 # 'rev-branch-cache' is no longer advertised, but still supported
1689 # for legacy clients.
1689 # for legacy clients.
1690
1690
1691 return caps
1691 return caps
1692
1692
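# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# Shape of the value returned by getrepocaps() above: a plain dict mapping
# capability names to tuples of supported values.  The exact contents depend
# on the repository configuration; the sample below is hypothetical:
#
#     caps = getrepocaps(repo, role=b'server')
#     # e.g. {b'HG20': (), b'changegroup': (b'01', b'02'),
#     #       b'obsmarkers': (b'V0', b'V1'), b'phases': (b'heads',), ...}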
1693
1693
1694 def bundle2caps(remote) -> "Capabilities":
1694 def bundle2caps(remote) -> "Capabilities":
1695 """return the bundle capabilities of a peer as dict"""
1695 """return the bundle capabilities of a peer as dict"""
1696 raw = remote.capable(b'bundle2')
1696 raw = remote.capable(b'bundle2')
1697 if not raw and raw != b'':
1697 if not raw and raw != b'':
1698 return {}
1698 return {}
1699 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1699 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1700 return decodecaps(capsblob)
1700 return decodecaps(capsblob)
1701
1701
1702
1702
1703 def obsmarkersversion(caps: "Capabilities"):
1703 def obsmarkersversion(caps: "Capabilities"):
1704 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1704 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1705 obscaps = caps.get(b'obsmarkers', ())
1705 obscaps = caps.get(b'obsmarkers', ())
1706 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1706 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1707
1707
1708
1708
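# --- illustrative sketch (editor's note, not part of bundle2.py) ---
# obsmarkersversion() above simply strips the b'V' prefix and keeps the
# integer versions, e.g.:
#
#     obsmarkersversion({b'obsmarkers': (b'V0', b'V1')}) == [0, 1]
#     obsmarkersversion({}) == []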
1709 def writenewbundle(
1709 def writenewbundle(
1710 ui,
1710 ui,
1711 repo,
1711 repo,
1712 source,
1712 source,
1713 filename,
1713 filename,
1714 bundletype,
1714 bundletype,
1715 outgoing,
1715 outgoing,
1716 opts,
1716 opts,
1717 vfs=None,
1717 vfs=None,
1718 compression=None,
1718 compression=None,
1719 compopts=None,
1719 compopts=None,
1720 allow_internal=False,
1720 allow_internal=False,
1721 ):
1721 ):
1722 if bundletype.startswith(b'HG10'):
1722 if bundletype.startswith(b'HG10'):
1723 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1723 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1724 return writebundle(
1724 return writebundle(
1725 ui,
1725 ui,
1726 cg,
1726 cg,
1727 filename,
1727 filename,
1728 bundletype,
1728 bundletype,
1729 vfs=vfs,
1729 vfs=vfs,
1730 compression=compression,
1730 compression=compression,
1731 compopts=compopts,
1731 compopts=compopts,
1732 )
1732 )
1733 elif not bundletype.startswith(b'HG20'):
1733 elif not bundletype.startswith(b'HG20'):
1734 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1734 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1735
1735
1736 # enforce that no internal phases are to be bundled
1736 # enforce that no internal phases are to be bundled
1737 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1737 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1738 if bundled_internal and not allow_internal:
1738 if bundled_internal and not allow_internal:
1739 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1739 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1740 msg = "backup bundle would contains %d internal changesets"
1740 msg = "backup bundle would contains %d internal changesets"
1741 msg %= count
1741 msg %= count
1742 raise error.ProgrammingError(msg)
1742 raise error.ProgrammingError(msg)
1743
1743
1744 caps: "Capabilities" = {}
1744 caps: "Capabilities" = {}
1745 if opts.get(b'obsolescence', False):
1745 if opts.get(b'obsolescence', False):
1746 caps[b'obsmarkers'] = (b'V1',)
1746 caps[b'obsmarkers'] = (b'V1',)
1747 stream_version = opts.get(b'stream', b"")
1747 stream_version = opts.get(b'stream', b"")
1748 if stream_version == b"v2":
1748 if stream_version == b"v2":
1749 caps[b'stream'] = [b'v2']
1749 caps[b'stream'] = [b'v2']
1750 elif stream_version == b"v3-exp":
1750 elif stream_version == b"v3-exp":
1751 caps[b'stream'] = [b'v3-exp']
1751 caps[b'stream'] = [b'v3-exp']
1752 bundle = bundle20(ui, caps)
1752 bundle = bundle20(ui, caps)
1753 bundle.setcompression(compression, compopts)
1753 bundle.setcompression(compression, compopts)
1754 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1754 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1755 chunkiter = bundle.getchunks()
1755 chunkiter = bundle.getchunks()
1756
1756
1757 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1757 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1758
1758
1759
1759
1760 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1760 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1761 # We should eventually reconcile this logic with the one behind
1761 # We should eventually reconcile this logic with the one behind
1762 # 'exchange.getbundle2partsgenerator'.
1762 # 'exchange.getbundle2partsgenerator'.
1763 #
1763 #
1764 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1764 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1765 # different right now. So we keep them separated for now for the sake of
1765 # different right now. So we keep them separated for now for the sake of
1766 # simplicity.
1766 # simplicity.
1767
1767
1768 # we might not always want a changegroup in such a bundle, for example in
1768 # we might not always want a changegroup in such a bundle, for example in
1769 # stream bundles
1769 # stream bundles
1770 if opts.get(b'changegroup', True):
1770 if opts.get(b'changegroup', True):
1771 cgversion = opts.get(b'cg.version')
1771 cgversion = opts.get(b'cg.version')
1772 if cgversion is None:
1772 if cgversion is None:
1773 cgversion = changegroup.safeversion(repo)
1773 cgversion = changegroup.safeversion(repo)
1774 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1774 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1775 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1775 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1776 part.addparam(b'version', cg.version)
1776 part.addparam(b'version', cg.version)
1777 if b'clcount' in cg.extras:
1777 if b'clcount' in cg.extras:
1778 part.addparam(
1778 part.addparam(
1779 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1779 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1780 )
1780 )
1781 if opts.get(b'phases'):
1781 if opts.get(b'phases'):
1782 target_phase = phases.draft
1782 target_phase = phases.draft
1783 for head in outgoing.ancestorsof:
1783 for head in outgoing.ancestorsof:
1784 target_phase = max(target_phase, repo[head].phase())
1784 target_phase = max(target_phase, repo[head].phase())
1785 if target_phase > phases.draft:
1785 if target_phase > phases.draft:
1786 part.addparam(
1786 part.addparam(
1787 b'targetphase',
1787 b'targetphase',
1788 b'%d' % target_phase,
1788 b'%d' % target_phase,
1789 mandatory=False,
1789 mandatory=False,
1790 )
1790 )
1791 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1791 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1792 part.addparam(b'exp-sidedata', b'1')
1792 part.addparam(b'exp-sidedata', b'1')
1793
1793
1794 if opts.get(b'stream', b"") == b"v2":
1794 if opts.get(b'stream', b"") == b"v2":
1795 addpartbundlestream2(bundler, repo, stream=True)
1795 addpartbundlestream2(bundler, repo, stream=True)
1796
1796
1797 if opts.get(b'stream', b"") == b"v3-exp":
1797 if opts.get(b'stream', b"") == b"v3-exp":
1798 addpartbundlestream2(bundler, repo, stream=True)
1798 addpartbundlestream2(bundler, repo, stream=True)
1799
1799
1800 if opts.get(b'tagsfnodescache', True):
1800 if opts.get(b'tagsfnodescache', True):
1801 addparttagsfnodescache(repo, bundler, outgoing)
1801 addparttagsfnodescache(repo, bundler, outgoing)
1802
1802
1803 if opts.get(b'revbranchcache', True):
1803 if opts.get(b'revbranchcache', True):
1804 addpartrevbranchcache(repo, bundler, outgoing)
1804 addpartrevbranchcache(repo, bundler, outgoing)
1805
1805
1806 if opts.get(b'obsolescence', False):
1806 if opts.get(b'obsolescence', False):
1807 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1807 obsmarkers = repo.obsstore.relevantmarkers(nodes=outgoing.missing)
1808 buildobsmarkerspart(
1808 buildobsmarkerspart(
1809 bundler,
1809 bundler,
1810 obsmarkers,
1810 obsmarkers,
1811 mandatory=opts.get(b'obsolescence-mandatory', True),
1811 mandatory=opts.get(b'obsolescence-mandatory', True),
1812 )
1812 )
1813
1813
1814 if opts.get(b'phases', False):
1814 if opts.get(b'phases', False):
1815 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1815 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1816 phasedata = phases.binaryencode(headsbyphase)
1816 phasedata = phases.binaryencode(headsbyphase)
1817 bundler.newpart(b'phase-heads', data=phasedata)
1817 bundler.newpart(b'phase-heads', data=phasedata)
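
    # Rough usage sketch (values are hypothetical):
    #
    #   opts = {
    #       b'changegroup': True,
    #       b'obsolescence': True,
    #       b'phases': True,
    #   }
    #   _addpartsfromopts(ui, repo, bundler, b'bundle', outgoing, opts)
    #
    # would add 'changegroup', 'obsmarkers' and 'phase-heads' parts (plus the
    # optional tags-fnodes and rev-branch-cache parts) to the bundler.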


def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changeset
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(
            b'hgtagsfnodes',
            mandatory=False,
            data=b''.join(chunks),
        )
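
    # Payload sketch: the 'hgtagsfnodes' part is a flat byte string of
    # 20-byte changeset node / 20-byte .hgtags filenode pairs, so a bundle
    # with N heads that have cached entries carries 40 * N bytes here.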


def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changeset
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
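
    # Wire-format sketch for one branch entry generated above, assuming a
    # branch named b'default' with two open heads and one closed head among
    # the outgoing nodes:
    #
    #   rbcstruct.pack(7, 2, 1) + b'default' + <20-byte node> * 3
    #
    # i.e. a '>III' header (name length, open count, closed count) followed
    # by the UTF-8 branch name and the sorted node lists.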


def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params
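
# Sketch of the formatting helpers above (output shown under the assumption
# that urlreq.quote percent-escapes commas and equal signs):
#
#   _formatrequirementsspec([b'revlogv1', b'generaldelta', b'shared'])
#       -> b'generaldelta%2Crevlogv1'   # 'shared' is dropped, rest sorted
#   _formatrequirementsparams([b'revlogv1'])
#       -> b'requirements%3Drevlogv1'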


def format_remote_wanted_sidedata(repo):
    """Formats a repo's wanted sidedata categories into a bytestring for
    capabilities exchange."""
    wanted = b""
    if repo._wanted_sidedata:
        wanted = b','.join(
            pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
        )
    return wanted


def read_remote_wanted_sidedata(remote):
    sidedata_categories = remote.capable(b'exp-wanted-sidedata')
    return read_wanted_sidedata(sidedata_categories)


def read_wanted_sidedata(formatted):
    if formatted:
        return set(formatted.split(b','))
    return set()
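
# Round-trip sketch with hypothetical sidedata category names:
#
#   format_remote_wanted_sidedata(repo)   -> b'copies'   (if that is wanted)
#   read_wanted_sidedata(b'copies,extra') -> {b'copies', b'extra'}
#   read_wanted_sidedata(b'')             -> set()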


def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        msg = _(b'stream data requested but server does not allow this feature')
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    if not (b'stream' in bundler.capabilities):
        msg = _(
            b'stream data requested but supported streaming clone versions were not specified'
        )
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    client_supported = set(bundler.capabilities[b'stream'])
    server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
    common_supported = client_supported & server_supported
    if not common_supported:
        msg = _(b'no common supported version with the client: %s; %s')
        str_server = b','.join(sorted(server_supported))
        str_client = b','.join(sorted(client_supported))
        msg %= (str_server, str_client)
        raise error.Abort(msg)
    version = max(common_supported)

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get('includepats')
    excludepats = kwargs.get('excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    if version == b"v2":
        filecount, bytecount, it = streamclone.generatev2(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream2', data=it)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'requirements', requirements, mandatory=True)
    elif version == b"v3-exp":
        it = streamclone.generatev3(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream3-exp', data=it)
        part.addparam(b'requirements', requirements, mandatory=True)
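
    # Negotiation sketch for the version selection above: if the client
    # advertised stream caps {b'v2'} and this server supports
    # {b'v2', b'v3-exp'}, the intersection is {b'v2'} and max() picks b'v2',
    # so a 'stream2' part is emitted; only when both sides support v3-exp
    # does the experimental 'stream3-exp' part get used.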


def buildobsmarkerspart(bundler, markers, mandatory=True):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
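
# Usage sketch, mirroring the caller in _addpartsfromopts() above:
#
#   markers = repo.obsstore.relevantmarkers(nodes=outgoing.missing)
#   buildobsmarkerspart(bundler, markers, mandatory=False)
#
# returns None when there are no markers, otherwise the new 'obsmarkers' part.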


def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only supports v1 changegroups')
            )

        # HG20 is the case without 2 values to unpack, but is handled above.
        # pytype: disable=bad-unpacking
        header, comp = bundletypes[bundletype]
        # pytype: enable=bad-unpacking

        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result
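
# Worked examples of the combination rule above:
#
#   results [1, 1]   -> no head change            -> 1
#   results [3, 2]   -> changedheads = 2 + 1 = 3  -> 1 + 3 = 4
#   results [-2]     -> changedheads = -1         -> -1 + -1 = -2
#   results [1, 0]   -> any zero short-circuits   -> 0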


@parthandler(
    b'changegroup',
    (
        b'version',
        b'nbchanges',
        b'exp-sidedata',
        b'exp-wanted-sidedata',
        b'treemanifest',
        b'targetphase',
    ),
)
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo"""
    from . import localrepo

    tr = op.gettransaction()
    unpackerversion = inpart.params.get(b'version', b'01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if b'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get(b'nbchanges'))
    if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
        if len(op.repo.changelog) != 0:
            raise error.Abort(
                _(
                    b"bundle contains tree manifests, but local repo is "
                    b"non-empty and does not use tree manifests"
                )
            )
        op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
        op.repo.svfs.options = localrepo.resolvestorevfsoptions(
            op.repo.ui, op.repo.requirements, op.repo.features
        )
        scmutil.writereporequirements(op.repo)

    extrakwargs = {}
    targetphase = inpart.params.get(b'targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)

    remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
    extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)

    ret = _processchangegroup(
        op,
        cg,
        tr,
        op.source,
        b'bundle2',
        expectedtotal=nbchangesets,
        **extrakwargs,
    )
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup', mandatory=False)
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    assert not inpart.read()
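
# Example of the part parameters consumed above (values hypothetical):
# a 'changegroup' part carrying params {b'version': b'02', b'nbchanges': b'12',
# b'targetphase': b'2'} is unbundled with the matching unpacker, reports
# progress against 12 expected changesets, and adds the revisions in the
# secret phase.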


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate what was retrieved by
        the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = urlutil.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (urlutil.hidepassword(raw_url), e.message)
        )
    assert not inpart.read()
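
# Example 'remote-changegroup' part parameters (values hypothetical):
#
#   url         = b'https://example.com/pull.hg'
#   size        = b'123456'
#   digests     = b'sha1'
#   digest:sha1 = <40-character hex digest of the bundle10 file>
#
# The bundle is fetched from the url, applied, and then validated against
# the advertised size and digest(s).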


@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races regarding bookmarks: it contains
    binary encoded (bookmark, node) tuples. If the local state does not match
    the ones in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(op.repo, inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    short(node),
                    short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for races on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually updated
    during the push. If other activities happen on unrelated heads, it is
    ignored.

    This allows servers with high traffic to avoid push contention as long as
    only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in phasetonodes.items():
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing the
    # pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )
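
# Example pushkey exchange handled above (a bookmark move; values hypothetical):
#
#   namespace = b'bookmarks', key = b'@',
#   old = <old node hex, or b'' when the bookmark is new>, new = <new node hex>
#
# op.repo.pushkey() returns a truthy value on success; a falsy result on a
# mandatory part raises PushkeyFailed and aborts the unbundle.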


@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior is
    suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(op.repo, inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = hex(bookstore.get(book, b''))
                hookargs[b'new'] = hex(node if node is not None else b'')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        for book, node in changes:
            if bookmarks.isdivergent(book):
                msg = _(b'cannot accept divergent bookmark %s!') % book
                raise error.Abort(msg)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook(unused_success):
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )
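
    # Mode sketch: a push typically runs with the default 'apply' branch
    # above, while a pull sets op.modes[b'bookmarks'] = b'records' so the
    # decoded (bookmark, node) pairs are only stored in op.records and the
    # caller decides later how to apply them locally.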


@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


@parthandler(b'hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)


rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """Legacy part, ignored for compatibility with bundles from or
    for Mercurial before 5.7. Newer Mercurial computes the cache
    efficiently enough during unbundling that the additional transfer
    is unnecessary."""


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)
2613
2613
2614
2614
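A minimal sketch of the key mangling performed by the pushvars handler above; the DEBUG variable is hypothetical, and the HG_ prefix is the usual way hook arguments reach server-side shell hooks.

    def pushvars_to_hookargs(advisoryparams):
        # mirrors the handler above: keys are uppercased and prefixed
        hookargs = {}
        for key, value in advisoryparams:
            hookargs[b"USERVAR_" + key.upper()] = value
        return hookargs

    # e.g. `hg push --pushvars DEBUG=1` (with push.pushvars.server enabled on the
    # server) would typically reach server-side shell hooks as HG_USERVAR_DEBUG=1
    assert pushvars_to_hookargs([(b'debug', b'1')]) == {b'USERVAR_DEBUG': b'1'}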
2615 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2615 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2616 def handlestreamv2bundle(op, part):
2616 def handlestreamv2bundle(op, part):
2617 requirements = urlreq.unquote(part.params[b'requirements'])
2617 requirements = urlreq.unquote(part.params[b'requirements'])
2618 requirements = requirements.split(b',') if requirements else []
2618 requirements = requirements.split(b',') if requirements else []
2619 filecount = int(part.params[b'filecount'])
2619 filecount = int(part.params[b'filecount'])
2620 bytecount = int(part.params[b'bytecount'])
2620 bytecount = int(part.params[b'bytecount'])
2621
2621
2622 repo = op.repo
2622 repo = op.repo
2623 if len(repo):
2623 if len(repo):
2624 msg = _(b'cannot apply stream clone to non empty repository')
2624 msg = _(b'cannot apply stream clone to non empty repository')
2625 raise error.Abort(msg)
2625 raise error.Abort(msg)
2626
2626
2627 repo.ui.debug(b'applying stream bundle\n')
2627 repo.ui.debug(b'applying stream bundle\n')
2628 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2628 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2629
2629
2630
2630
2631 @parthandler(b'stream3-exp', (b'requirements',))
2631 @parthandler(b'stream3-exp', (b'requirements',))
2632 def handlestreamv3bundle(op, part):
2632 def handlestreamv3bundle(op, part):
2633 requirements = urlreq.unquote(part.params[b'requirements'])
2633 requirements = urlreq.unquote(part.params[b'requirements'])
2634 requirements = requirements.split(b',') if requirements else []
2634 requirements = requirements.split(b',') if requirements else []
2635
2635
2636 repo = op.repo
2636 repo = op.repo
2637 if len(repo):
2637 if len(repo):
2638 msg = _(b'cannot apply stream clone to non empty repository')
2638 msg = _(b'cannot apply stream clone to non empty repository')
2639 raise error.Abort(msg)
2639 raise error.Abort(msg)
2640
2640
2641 repo.ui.debug(b'applying stream bundle\n')
2641 repo.ui.debug(b'applying stream bundle\n')
2642 streamclone.applybundlev3(repo, part, requirements)
2642 streamclone.applybundlev3(repo, part, requirements)
2643
2643
2644
2644
2645 def widen_bundle(
2645 def widen_bundle(
2646 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2646 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2647 ):
2647 ):
2648 """generates bundle2 for widening a narrow clone
2648 """generates bundle2 for widening a narrow clone
2649
2649
2650 bundler is the bundle to which data should be added
2650 bundler is the bundle to which data should be added
2651 repo is the localrepository instance
2651 repo is the localrepository instance
2652 oldmatcher matches what the client already has
2652 oldmatcher matches what the client already has
2653 newmatcher matches what the client needs (including what it already has)
2653 newmatcher matches what the client needs (including what it already has)
2654 common is set of common heads between server and client
2654 common is set of common heads between server and client
2655 known is a set of revs known on the client side (used in ellipses)
2655 known is a set of revs known on the client side (used in ellipses)
2656 cgversion is the changegroup version to send
2656 cgversion is the changegroup version to send
2657 ellipses is a boolean telling whether to send ellipses data or not
2657 ellipses is a boolean telling whether to send ellipses data or not
2658
2658
2659 returns the bundler populated with the data required for widening
2659 returns the bundler populated with the data required for widening
2660 """
2660 """
2661 commonnodes = set()
2661 commonnodes = set()
2662 cl = repo.changelog
2662 cl = repo.changelog
2663 for r in repo.revs(b"::%ln", common):
2663 for r in repo.revs(b"::%ln", common):
2664 commonnodes.add(cl.node(r))
2664 commonnodes.add(cl.node(r))
2665 if commonnodes:
2665 if commonnodes:
2666 packer = changegroup.getbundler(
2666 packer = changegroup.getbundler(
2667 cgversion,
2667 cgversion,
2668 repo,
2668 repo,
2669 oldmatcher=oldmatcher,
2669 oldmatcher=oldmatcher,
2670 matcher=newmatcher,
2670 matcher=newmatcher,
2671 fullnodes=commonnodes,
2671 fullnodes=commonnodes,
2672 )
2672 )
2673 cgdata = packer.generate(
2673 cgdata = packer.generate(
2674 {repo.nullid},
2674 {repo.nullid},
2675 list(commonnodes),
2675 list(commonnodes),
2676 False,
2676 False,
2677 b'narrow_widen',
2677 b'narrow_widen',
2678 changelog=False,
2678 changelog=False,
2679 )
2679 )
2680
2680
2681 part = bundler.newpart(b'changegroup', data=cgdata)
2681 part = bundler.newpart(b'changegroup', data=cgdata)
2682 part.addparam(b'version', cgversion)
2682 part.addparam(b'version', cgversion)
2683 if scmutil.istreemanifest(repo):
2683 if scmutil.istreemanifest(repo):
2684 part.addparam(b'treemanifest', b'1')
2684 part.addparam(b'treemanifest', b'1')
2685 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2685 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2686 part.addparam(b'exp-sidedata', b'1')
2686 part.addparam(b'exp-sidedata', b'1')
2687 wanted = format_remote_wanted_sidedata(repo)
2687 wanted = format_remote_wanted_sidedata(repo)
2688 part.addparam(b'exp-wanted-sidedata', wanted)
2688 part.addparam(b'exp-wanted-sidedata', wanted)
2689
2689
2690 return bundler
2690 return bundler
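A rough sketch of how a caller could drive widen_bundle; the include lists, the common/known sets and the use of narrowspec.match are placeholders standing in for the real narrow-widening code path, not a definitive invocation.

    from mercurial import bundle2, narrowspec

    def make_widen_bundle(repo, old_includes, new_includes, common, known):
        # hypothetical helper: build matchers for the old and new narrowspecs
        oldmatcher = narrowspec.match(repo.root, include=old_includes)
        newmatcher = narrowspec.match(repo.root, include=new_includes)
        bundler = bundle2.bundle20(repo.ui)
        return bundle2.widen_bundle(
            bundler,
            repo,
            oldmatcher,
            newmatcher,
            common,
            known,
            cgversion=b'03',
            ellipses=False,
        )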
@@ -1,2954 +1,2959
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import annotations
8 from __future__ import annotations
9
9
10 import collections
10 import collections
11 import weakref
11 import weakref
12
12
13 from .i18n import _
13 from .i18n import _
14 from .node import (
14 from .node import (
15 hex,
15 hex,
16 nullrev,
16 nullrev,
17 )
17 )
18 from . import (
18 from . import (
19 bookmarks as bookmod,
19 bookmarks as bookmod,
20 bundle2,
20 bundle2,
21 bundlecaches,
21 bundlecaches,
22 changegroup,
22 changegroup,
23 discovery,
23 discovery,
24 error,
24 error,
25 lock as lockmod,
25 lock as lockmod,
26 logexchange,
26 logexchange,
27 narrowspec,
27 narrowspec,
28 obsolete,
28 obsolete,
29 obsutil,
29 obsutil,
30 phases,
30 phases,
31 pushkey,
31 pushkey,
32 pycompat,
32 pycompat,
33 requirements,
33 requirements,
34 scmutil,
34 scmutil,
35 streamclone,
35 streamclone,
36 url as urlmod,
36 url as urlmod,
37 util,
37 util,
38 wireprototypes,
38 wireprototypes,
39 )
39 )
40 from .utils import (
40 from .utils import (
41 hashutil,
41 hashutil,
42 stringutil,
42 stringutil,
43 urlutil,
43 urlutil,
44 )
44 )
45 from .interfaces import repository
45 from .interfaces import repository
46
46
47 urlerr = util.urlerr
47 urlerr = util.urlerr
48 urlreq = util.urlreq
48 urlreq = util.urlreq
49
49
50 _NARROWACL_SECTION = b'narrowacl'
50 _NARROWACL_SECTION = b'narrowacl'
51
51
52
52
53 def readbundle(ui, fh, fname, vfs=None):
53 def readbundle(ui, fh, fname, vfs=None):
54 header = changegroup.readexactly(fh, 4)
54 header = changegroup.readexactly(fh, 4)
55
55
56 alg = None
56 alg = None
57 if not fname:
57 if not fname:
58 fname = b"stream"
58 fname = b"stream"
59 if not header.startswith(b'HG') and header.startswith(b'\0'):
59 if not header.startswith(b'HG') and header.startswith(b'\0'):
60 fh = changegroup.headerlessfixup(fh, header)
60 fh = changegroup.headerlessfixup(fh, header)
61 header = b"HG10"
61 header = b"HG10"
62 alg = b'UN'
62 alg = b'UN'
63 elif vfs:
63 elif vfs:
64 fname = vfs.join(fname)
64 fname = vfs.join(fname)
65
65
66 magic, version = header[0:2], header[2:4]
66 magic, version = header[0:2], header[2:4]
67
67
68 if magic != b'HG':
68 if magic != b'HG':
69 raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
69 raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
70 if version == b'10':
70 if version == b'10':
71 if alg is None:
71 if alg is None:
72 alg = changegroup.readexactly(fh, 2)
72 alg = changegroup.readexactly(fh, 2)
73 return changegroup.cg1unpacker(fh, alg)
73 return changegroup.cg1unpacker(fh, alg)
74 elif version.startswith(b'2'):
74 elif version.startswith(b'2'):
75 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
75 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
76 elif version == b'S1':
76 elif version == b'S1':
77 return streamclone.streamcloneapplier(fh)
77 return streamclone.streamcloneapplier(fh)
78 else:
78 else:
79 raise error.Abort(
79 raise error.Abort(
80 _(b'%s: unknown bundle version %s') % (fname, version)
80 _(b'%s: unknown bundle version %s') % (fname, version)
81 )
81 )
82
82
83
83
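For orientation, a sketch of how a caller typically dispatches on what readbundle returns; the path handling and ui construction are simplified and not the actual command-line code path.

    from mercurial import bundle2, changegroup, exchange, streamclone, ui as uimod

    def describe_bundle(path):
        # classify a bundle file by the unbundler readbundle() hands back
        ui = uimod.ui.load()
        with open(path, 'rb') as fh:
            unbundler = exchange.readbundle(ui, fh, path.encode('utf-8'))
            if isinstance(unbundler, changegroup.cg1unpacker):
                return 'bundle1 changegroup'
            elif isinstance(unbundler, bundle2.unbundle20):
                return 'bundle2'
            elif isinstance(unbundler, streamclone.streamcloneapplier):
                return 'packed1 stream clone'
            return 'unknown'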
84 def _format_params(params):
84 def _format_params(params):
85 parts = []
85 parts = []
86 for key, value in sorted(params.items()):
86 for key, value in sorted(params.items()):
87 value = urlreq.quote(value)
87 value = urlreq.quote(value)
88 parts.append(b"%s=%s" % (key, value))
88 parts.append(b"%s=%s" % (key, value))
89 return b';'.join(parts)
89 return b';'.join(parts)
90
90
91
91
92 def getbundlespec(ui, fh):
92 def getbundlespec(ui, fh):
93 """Infer the bundlespec from a bundle file handle.
93 """Infer the bundlespec from a bundle file handle.
94
94
95 The input file handle is seeked and the original seek position is not
95 The input file handle is seeked and the original seek position is not
96 restored.
96 restored.
97 """
97 """
98
98
99 def speccompression(alg):
99 def speccompression(alg):
100 try:
100 try:
101 return util.compengines.forbundletype(alg).bundletype()[0]
101 return util.compengines.forbundletype(alg).bundletype()[0]
102 except KeyError:
102 except KeyError:
103 return None
103 return None
104
104
105 params = {}
105 params = {}
106
106
107 b = readbundle(ui, fh, None)
107 b = readbundle(ui, fh, None)
108 if isinstance(b, changegroup.cg1unpacker):
108 if isinstance(b, changegroup.cg1unpacker):
109 alg = b._type
109 alg = b._type
110 if alg == b'_truncatedBZ':
110 if alg == b'_truncatedBZ':
111 alg = b'BZ'
111 alg = b'BZ'
112 comp = speccompression(alg)
112 comp = speccompression(alg)
113 if not comp:
113 if not comp:
114 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
114 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
115 return b'%s-v1' % comp
115 return b'%s-v1' % comp
116 elif isinstance(b, bundle2.unbundle20):
116 elif isinstance(b, bundle2.unbundle20):
117 if b'Compression' in b.params:
117 if b'Compression' in b.params:
118 comp = speccompression(b.params[b'Compression'])
118 comp = speccompression(b.params[b'Compression'])
119 if not comp:
119 if not comp:
120 raise error.Abort(
120 raise error.Abort(
121 _(b'unknown compression algorithm: %s') % comp
121 _(b'unknown compression algorithm: %s') % comp
122 )
122 )
123 else:
123 else:
124 comp = b'none'
124 comp = b'none'
125
125
126 version = None
126 version = None
127 for part in b.iterparts():
127 for part in b.iterparts():
128 if part.type == b'changegroup':
128 if part.type == b'changegroup':
129 cgversion = part.params[b'version']
129 cgversion = part.params[b'version']
130 if cgversion in (b'01', b'02'):
130 if cgversion in (b'01', b'02'):
131 version = b'v2'
131 version = b'v2'
132 elif cgversion in (b'03',):
132 elif cgversion in (b'03',):
133 version = b'v2'
133 version = b'v2'
134 params[b'cg.version'] = cgversion
134 params[b'cg.version'] = cgversion
135 else:
135 else:
136 raise error.Abort(
136 raise error.Abort(
137 _(
137 _(
138 b'changegroup version %s does not have '
138 b'changegroup version %s does not have '
139 b'a known bundlespec'
139 b'a known bundlespec'
140 )
140 )
141 % cgversion,
141 % cgversion,
142 hint=_(b'try upgrading your Mercurial client'),
142 hint=_(b'try upgrading your Mercurial client'),
143 )
143 )
144 elif part.type == b'stream2' and version is None:
144 elif part.type == b'stream2' and version is None:
145 # A stream2 part must be part of a v2 bundle
145 # A stream2 part must be part of a v2 bundle
146 requirements = urlreq.unquote(part.params[b'requirements'])
146 requirements = urlreq.unquote(part.params[b'requirements'])
147 splitted = requirements.split()
147 splitted = requirements.split()
148 params = bundle2._formatrequirementsparams(splitted)
148 params = bundle2._formatrequirementsparams(splitted)
149 return b'none-v2;stream=v2;%s' % params
149 return b'none-v2;stream=v2;%s' % params
150 elif part.type == b'stream3-exp' and version is None:
150 elif part.type == b'stream3-exp' and version is None:
151 # A stream3 part must be part of a v2 bundle
151 # A stream3 part must be part of a v2 bundle
152 requirements = urlreq.unquote(part.params[b'requirements'])
152 requirements = urlreq.unquote(part.params[b'requirements'])
153 splitted = requirements.split()
153 splitted = requirements.split()
154 params = bundle2._formatrequirementsparams(splitted)
154 params = bundle2._formatrequirementsparams(splitted)
155 return b'none-v2;stream=v3-exp;%s' % params
155 return b'none-v2;stream=v3-exp;%s' % params
156 elif part.type == b'obsmarkers':
156 elif part.type == b'obsmarkers':
157 params[b'obsolescence'] = b'yes'
157 params[b'obsolescence'] = b'yes'
158 if not part.mandatory:
158 if not part.mandatory:
159 params[b'obsolescence-mandatory'] = b'no'
159 params[b'obsolescence-mandatory'] = b'no'
160
160
161 if not version:
161 if not version:
162 params[b'changegroup'] = b'no'
162 params[b'changegroup'] = b'no'
163 version = b'v2'
163 version = b'v2'
164 spec = b'%s-%s' % (comp, version)
164 spec = b'%s-%s' % (comp, version)
165 if params:
165 if params:
166 spec += b';'
166 spec += b';'
167 spec += _format_params(params)
167 spec += _format_params(params)
168 return spec
168 return spec
169
169
170 elif isinstance(b, streamclone.streamcloneapplier):
170 elif isinstance(b, streamclone.streamcloneapplier):
171 requirements = streamclone.readbundle1header(fh)[2]
171 requirements = streamclone.readbundle1header(fh)[2]
172 formatted = bundle2._formatrequirementsparams(requirements)
172 formatted = bundle2._formatrequirementsparams(requirements)
173 return b'none-packed1;%s' % formatted
173 return b'none-packed1;%s' % formatted
174 else:
174 else:
175 raise error.Abort(_(b'unknown bundle type: %s') % b)
175 raise error.Abort(_(b'unknown bundle type: %s') % b)
176
176
177
177
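A short usage sketch, assuming a bundle file produced by e.g. `hg bundle`; the file path and ui construction are illustrative only.

    from mercurial import exchange, ui as uimod

    def bundlespec_of(path):
        # e.g. returns b'zstd-v2', b'bzip2-v1' or b'none-v2;stream=v2;...'
        ui = uimod.ui.load()
        with open(path, 'rb') as fh:
            return exchange.getbundlespec(ui, fh)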
178 def _computeoutgoing(repo, heads, common):
178 def _computeoutgoing(repo, heads, common):
179 """Computes which revs are outgoing given a set of common
179 """Computes which revs are outgoing given a set of common
180 and a set of heads.
180 and a set of heads.
181
181
182 This is a separate function so extensions can have access to
182 This is a separate function so extensions can have access to
183 the logic.
183 the logic.
184
184
185 Returns a discovery.outgoing object.
185 Returns a discovery.outgoing object.
186 """
186 """
187 cl = repo.changelog
187 cl = repo.changelog
188 if common:
188 if common:
189 hasnode = cl.hasnode
189 hasnode = cl.hasnode
190 common = [n for n in common if hasnode(n)]
190 common = [n for n in common if hasnode(n)]
191 else:
191 else:
192 common = [repo.nullid]
192 common = [repo.nullid]
193 if not heads:
193 if not heads:
194 heads = cl.heads()
194 heads = cl.heads()
195 return discovery.outgoing(repo, common, heads)
195 return discovery.outgoing(repo, common, heads)
196
196
197
197
198 def _checkpublish(pushop):
198 def _checkpublish(pushop):
199 repo = pushop.repo
199 repo = pushop.repo
200 ui = repo.ui
200 ui = repo.ui
201 behavior = ui.config(b'experimental', b'auto-publish')
201 behavior = ui.config(b'experimental', b'auto-publish')
202 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
202 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
203 return
203 return
204 remotephases = listkeys(pushop.remote, b'phases')
204 remotephases = listkeys(pushop.remote, b'phases')
205 if not remotephases.get(b'publishing', False):
205 if not remotephases.get(b'publishing', False):
206 return
206 return
207
207
208 if pushop.revs is None:
208 if pushop.revs is None:
209 published = repo.filtered(b'served').revs(b'not public()')
209 published = repo.filtered(b'served').revs(b'not public()')
210 else:
210 else:
211 published = repo.revs(b'::%ln - public()', pushop.revs)
211 published = repo.revs(b'::%ln - public()', pushop.revs)
212 # we want to use pushop.revs in the revset even if they themselves are
212 # we want to use pushop.revs in the revset even if they themselves are
213 # secret, but we don't want to have anything that the server won't see
213 # secret, but we don't want to have anything that the server won't see
214 # in the result of this expression
214 # in the result of this expression
215 published &= repo.filtered(b'served')
215 published &= repo.filtered(b'served')
216 if published:
216 if published:
217 if behavior == b'warn':
217 if behavior == b'warn':
218 ui.warn(
218 ui.warn(
219 _(b'%i changesets about to be published\n') % len(published)
219 _(b'%i changesets about to be published\n') % len(published)
220 )
220 )
221 elif behavior == b'confirm':
221 elif behavior == b'confirm':
222 if ui.promptchoice(
222 if ui.promptchoice(
223 _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
223 _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
224 % len(published)
224 % len(published)
225 ):
225 ):
226 raise error.CanceledError(_(b'user quit'))
226 raise error.CanceledError(_(b'user quit'))
227 elif behavior == b'abort':
227 elif behavior == b'abort':
228 msg = _(b'push would publish %i changesets') % len(published)
228 msg = _(b'push would publish %i changesets') % len(published)
229 hint = _(
229 hint = _(
230 b"use --publish or adjust 'experimental.auto-publish'"
230 b"use --publish or adjust 'experimental.auto-publish'"
231 b" config"
231 b" config"
232 )
232 )
233 raise error.Abort(msg, hint=hint)
233 raise error.Abort(msg, hint=hint)
234
234
235
235
236 def _forcebundle1(op):
236 def _forcebundle1(op):
237 """return true if a pull/push must use bundle1
237 """return true if a pull/push must use bundle1
238
238
239 This function is used to allow testing of the older bundle version"""
239 This function is used to allow testing of the older bundle version"""
240 ui = op.repo.ui
240 ui = op.repo.ui
241 # The goal of this config is to allow developers to choose the bundle
241 # The goal of this config is to allow developers to choose the bundle
242 # version used during exchange. This is especially handy during tests.
242 # version used during exchange. This is especially handy during tests.
243 # Value is a list of bundle versions to pick from; the highest version
243 # Value is a list of bundle versions to pick from; the highest version
244 # should be used.
244 # should be used.
245 #
245 #
246 # developer config: devel.legacy.exchange
246 # developer config: devel.legacy.exchange
247 exchange = ui.configlist(b'devel', b'legacy.exchange')
247 exchange = ui.configlist(b'devel', b'legacy.exchange')
248 forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
248 forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
249 return forcebundle1 or not op.remote.capable(b'bundle2')
249 return forcebundle1 or not op.remote.capable(b'bundle2')
250
250
251
251
252 class pushoperation:
252 class pushoperation:
253 """An object that represents a single push operation
253 """An object that represents a single push operation
254
254
255 Its purpose is to carry push related state and very common operations.
255 Its purpose is to carry push related state and very common operations.
256
256
257 A new pushoperation should be created at the beginning of each push and
257 A new pushoperation should be created at the beginning of each push and
258 discarded afterward.
258 discarded afterward.
259 """
259 """
260
260
261 def __init__(
261 def __init__(
262 self,
262 self,
263 repo,
263 repo,
264 remote,
264 remote,
265 force=False,
265 force=False,
266 revs=None,
266 revs=None,
267 newbranch=False,
267 newbranch=False,
268 bookmarks=(),
268 bookmarks=(),
269 publish=False,
269 publish=False,
270 pushvars=None,
270 pushvars=None,
271 ):
271 ):
272 # repo we push from
272 # repo we push from
273 self.repo = repo
273 self.repo = repo
274 self.ui = repo.ui
274 self.ui = repo.ui
275 # repo we push to
275 # repo we push to
276 self.remote = remote
276 self.remote = remote
277 # force option provided
277 # force option provided
278 self.force = force
278 self.force = force
279 # revs to be pushed (None is "all")
279 # revs to be pushed (None is "all")
280 self.revs = revs
280 self.revs = revs
281 # bookmark explicitly pushed
281 # bookmark explicitly pushed
282 self.bookmarks = bookmarks
282 self.bookmarks = bookmarks
283 # allow push of new branch
283 # allow push of new branch
284 self.newbranch = newbranch
284 self.newbranch = newbranch
285 # steps already performed
285 # steps already performed
286 # (used to check which steps have already been performed through bundle2)
286 # (used to check which steps have already been performed through bundle2)
287 self.stepsdone = set()
287 self.stepsdone = set()
288 # Integer version of the changegroup push result
288 # Integer version of the changegroup push result
289 # - None means nothing to push
289 # - None means nothing to push
290 # - 0 means HTTP error
290 # - 0 means HTTP error
291 # - 1 means we pushed and remote head count is unchanged *or*
291 # - 1 means we pushed and remote head count is unchanged *or*
292 # we have outgoing changesets but refused to push
292 # we have outgoing changesets but refused to push
293 # - other values as described by addchangegroup()
293 # - other values as described by addchangegroup()
294 self.cgresult = None
294 self.cgresult = None
295 # Boolean value for the bookmark push
295 # Boolean value for the bookmark push
296 self.bkresult = None
296 self.bkresult = None
297 # discovery.outgoing object (contains common and outgoing data)
297 # discovery.outgoing object (contains common and outgoing data)
298 self.outgoing = None
298 self.outgoing = None
299 # all remote topological heads before the push
299 # all remote topological heads before the push
300 self.remoteheads = None
300 self.remoteheads = None
301 # Details of the remote branch pre and post push
301 # Details of the remote branch pre and post push
302 #
302 #
303 # mapping: {'branch': ([remoteheads],
303 # mapping: {'branch': ([remoteheads],
304 # [newheads],
304 # [newheads],
305 # [unsyncedheads],
305 # [unsyncedheads],
306 # [discardedheads])}
306 # [discardedheads])}
307 # - branch: the branch name
307 # - branch: the branch name
308 # - remoteheads: the list of remote heads known locally
308 # - remoteheads: the list of remote heads known locally
309 # None if the branch is new
309 # None if the branch is new
310 # - newheads: the new remote heads (known locally) with outgoing pushed
310 # - newheads: the new remote heads (known locally) with outgoing pushed
311 # - unsyncedheads: the list of remote heads unknown locally.
311 # - unsyncedheads: the list of remote heads unknown locally.
312 # - discardedheads: the list of remote heads made obsolete by the push
312 # - discardedheads: the list of remote heads made obsolete by the push
313 self.pushbranchmap = None
313 self.pushbranchmap = None
314 # testable as a boolean indicating if any nodes are missing locally.
314 # testable as a boolean indicating if any nodes are missing locally.
315 self.incoming = None
315 self.incoming = None
316 # summary of the remote phase situation
316 # summary of the remote phase situation
317 self.remotephases = None
317 self.remotephases = None
318 # phase changes that must be pushed alongside the changesets
318 # phase changes that must be pushed alongside the changesets
319 self.outdatedphases = None
319 self.outdatedphases = None
320 # phase changes that must be pushed if the changeset push fails
320 # phase changes that must be pushed if the changeset push fails
321 self.fallbackoutdatedphases = None
321 self.fallbackoutdatedphases = None
322 # outgoing obsmarkers
322 # outgoing obsmarkers
323 self.outobsmarkers = set()
323 self.outobsmarkers = set()
324 # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
324 # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
325 self.outbookmarks = []
325 self.outbookmarks = []
326 # transaction manager
326 # transaction manager
327 self.trmanager = None
327 self.trmanager = None
328 # map { pushkey partid -> callback handling failure}
328 # map { pushkey partid -> callback handling failure}
329 # used to handle exception from mandatory pushkey part failure
329 # used to handle exception from mandatory pushkey part failure
330 self.pkfailcb = {}
330 self.pkfailcb = {}
331 # an iterable of pushvars or None
331 # an iterable of pushvars or None
332 self.pushvars = pushvars
332 self.pushvars = pushvars
333 # publish pushed changesets
333 # publish pushed changesets
334 self.publish = publish
334 self.publish = publish
335
335
336 @util.propertycache
336 @util.propertycache
337 def futureheads(self):
337 def futureheads(self):
338 """future remote heads if the changeset push succeeds"""
338 """future remote heads if the changeset push succeeds"""
339 return self.outgoing.ancestorsof
339 return self.outgoing.ancestorsof
340
340
341 @util.propertycache
341 @util.propertycache
342 def fallbackheads(self):
342 def fallbackheads(self):
343 """future remote heads if the changeset push fails"""
343 """future remote heads if the changeset push fails"""
344 if self.revs is None:
344 if self.revs is None:
345 # no target to push, all common heads are relevant
345 # no target to push, all common heads are relevant
346 return self.outgoing.commonheads
346 return self.outgoing.commonheads
347 unfi = self.repo.unfiltered()
347 unfi = self.repo.unfiltered()
348 # I want cheads = heads(::push_heads and ::commonheads)
348 # I want cheads = heads(::push_heads and ::commonheads)
349 #
349 #
350 # To push, we already computed
350 # To push, we already computed
351 # common = (::commonheads)
351 # common = (::commonheads)
352 # missing = ((commonheads::push_heads) - commonheads)
352 # missing = ((commonheads::push_heads) - commonheads)
353 #
353 #
354 # So we basically search
354 # So we basically search
355 #
355 #
356 # almost_heads = heads((parents(missing) + push_heads) & common)
356 # almost_heads = heads((parents(missing) + push_heads) & common)
357 #
357 #
358 # We use "almost" here as this can return revisions that are ancestors
358 # We use "almost" here as this can return revisions that are ancestors
359 # of others in the set and we need to explicitly turn it into an
359 # of others in the set and we need to explicitly turn it into an
360 # antichain later. We can do so using:
360 # antichain later. We can do so using:
361 #
361 #
362 # cheads = heads(almost_heads::almost_heads)
362 # cheads = heads(almost_heads::almost_heads)
363 #
363 #
364 # In practice the code is a bit more convoluted to avoid some extra
364 # In practice the code is a bit more convoluted to avoid some extra
365 # computation. It aims at doing the same computation as highlighted
365 # computation. It aims at doing the same computation as highlighted
366 # above however.
366 # above however.
367 common = self.outgoing.common
367 common = self.outgoing.common
368 unfi = self.repo.unfiltered()
368 unfi = self.repo.unfiltered()
369 cl = unfi.changelog
369 cl = unfi.changelog
370 to_rev = cl.index.rev
370 to_rev = cl.index.rev
371 to_node = cl.node
371 to_node = cl.node
372 parent_revs = cl.parentrevs
372 parent_revs = cl.parentrevs
373 unselected = []
373 unselected = []
374 cheads = set()
374 cheads = set()
375 # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly
375 # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly
376 for n in self.revs:
376 for n in self.revs:
377 r = to_rev(n)
377 r = to_rev(n)
378 if r in common:
378 if r in common:
379 cheads.add(r)
379 cheads.add(r)
380 else:
380 else:
381 unselected.append(r)
381 unselected.append(r)
382 known_non_heads = cl.ancestors(cheads, inclusive=True)
382 known_non_heads = cl.ancestors(cheads, inclusive=True)
383 if unselected:
383 if unselected:
384 missing_revs = {to_rev(n) for n in self.outgoing.missing}
384 missing_revs = {to_rev(n) for n in self.outgoing.missing}
385 missing_revs.add(nullrev)
385 missing_revs.add(nullrev)
386 root_points = set()
386 root_points = set()
387 for r in missing_revs:
387 for r in missing_revs:
388 p1, p2 = parent_revs(r)
388 p1, p2 = parent_revs(r)
389 if p1 not in missing_revs and p1 not in known_non_heads:
389 if p1 not in missing_revs and p1 not in known_non_heads:
390 root_points.add(p1)
390 root_points.add(p1)
391 if p2 not in missing_revs and p2 not in known_non_heads:
391 if p2 not in missing_revs and p2 not in known_non_heads:
392 root_points.add(p2)
392 root_points.add(p2)
393 if root_points:
393 if root_points:
394 heads = unfi.revs(b'heads(%ld::%ld)', root_points, root_points)
394 heads = unfi.revs(b'heads(%ld::%ld)', root_points, root_points)
395 cheads.update(heads)
395 cheads.update(heads)
396 # XXX-perf: could this be a set of revisions?
396 # XXX-perf: could this be a set of revisions?
397 return [to_node(r) for r in sorted(cheads)]
397 return [to_node(r) for r in sorted(cheads)]
398
398
399 @property
399 @property
400 def commonheads(self):
400 def commonheads(self):
401 """set of all common heads after changeset bundle push"""
401 """set of all common heads after changeset bundle push"""
402 if self.cgresult:
402 if self.cgresult:
403 return self.futureheads
403 return self.futureheads
404 else:
404 else:
405 return self.fallbackheads
405 return self.fallbackheads
406
406
407
407
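To make the fallbackheads computation above more concrete, the much slower revset it approximates can be spelled out directly; this sketch assumes push_heads and commonheads are lists of binary nodes.

    def naive_fallbackheads(repo, push_heads, commonheads):
        # the heads that stay common with the remote if the changeset push fails:
        # heads(::push_heads and ::commonheads), computed the straightforward way
        unfi = repo.unfiltered()
        revs = unfi.revs(b'heads(::%ln and ::%ln)', push_heads, commonheads)
        return [unfi.changelog.node(r) for r in revs]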
408 # mapping of message used when pushing bookmark
408 # mapping of message used when pushing bookmark
409 bookmsgmap = {
409 bookmsgmap = {
410 b'update': (
410 b'update': (
411 _(b"updating bookmark %s\n"),
411 _(b"updating bookmark %s\n"),
412 _(b'updating bookmark %s failed\n'),
412 _(b'updating bookmark %s failed\n'),
413 ),
413 ),
414 b'export': (
414 b'export': (
415 _(b"exporting bookmark %s\n"),
415 _(b"exporting bookmark %s\n"),
416 _(b'exporting bookmark %s failed\n'),
416 _(b'exporting bookmark %s failed\n'),
417 ),
417 ),
418 b'delete': (
418 b'delete': (
419 _(b"deleting remote bookmark %s\n"),
419 _(b"deleting remote bookmark %s\n"),
420 _(b'deleting remote bookmark %s failed\n'),
420 _(b'deleting remote bookmark %s failed\n'),
421 ),
421 ),
422 }
422 }
423
423
424
424
425 def push(
425 def push(
426 repo,
426 repo,
427 remote,
427 remote,
428 force=False,
428 force=False,
429 revs=None,
429 revs=None,
430 newbranch=False,
430 newbranch=False,
431 bookmarks=(),
431 bookmarks=(),
432 publish=False,
432 publish=False,
433 opargs=None,
433 opargs=None,
434 ):
434 ):
435 """Push outgoing changesets (limited by revs) from a local
435 """Push outgoing changesets (limited by revs) from a local
436 repository to remote. Return a pushoperation; its cgresult is an integer:
436 repository to remote. Return a pushoperation; its cgresult is an integer:
437 - None means nothing to push
437 - None means nothing to push
438 - 0 means HTTP error
438 - 0 means HTTP error
439 - 1 means we pushed and remote head count is unchanged *or*
439 - 1 means we pushed and remote head count is unchanged *or*
440 we have outgoing changesets but refused to push
440 we have outgoing changesets but refused to push
441 - other values as described by addchangegroup()
441 - other values as described by addchangegroup()
442 """
442 """
443 if opargs is None:
443 if opargs is None:
444 opargs = {}
444 opargs = {}
445 pushop = pushoperation(
445 pushop = pushoperation(
446 repo,
446 repo,
447 remote,
447 remote,
448 force,
448 force,
449 revs,
449 revs,
450 newbranch,
450 newbranch,
451 bookmarks,
451 bookmarks,
452 publish,
452 publish,
453 **pycompat.strkwargs(opargs),
453 **pycompat.strkwargs(opargs),
454 )
454 )
455 if pushop.remote.local():
455 if pushop.remote.local():
456 missing = (
456 missing = (
457 set(pushop.repo.requirements) - pushop.remote.local().supported
457 set(pushop.repo.requirements) - pushop.remote.local().supported
458 )
458 )
459 if missing:
459 if missing:
460 msg = _(
460 msg = _(
461 b"required features are not"
461 b"required features are not"
462 b" supported in the destination:"
462 b" supported in the destination:"
463 b" %s"
463 b" %s"
464 ) % (b', '.join(sorted(missing)))
464 ) % (b', '.join(sorted(missing)))
465 raise error.Abort(msg)
465 raise error.Abort(msg)
466
466
467 if not pushop.remote.canpush():
467 if not pushop.remote.canpush():
468 raise error.Abort(_(b"destination does not support push"))
468 raise error.Abort(_(b"destination does not support push"))
469
469
470 if not pushop.remote.capable(b'unbundle'):
470 if not pushop.remote.capable(b'unbundle'):
471 raise error.Abort(
471 raise error.Abort(
472 _(
472 _(
473 b'cannot push: destination does not support the '
473 b'cannot push: destination does not support the '
474 b'unbundle wire protocol command'
474 b'unbundle wire protocol command'
475 )
475 )
476 )
476 )
477 for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
477 for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
478 # Check that a computer is registered for that category for at least
478 # Check that a computer is registered for that category for at least
479 # one revlog kind.
479 # one revlog kind.
480 for kind, computers in repo._sidedata_computers.items():
480 for kind, computers in repo._sidedata_computers.items():
481 if computers.get(category):
481 if computers.get(category):
482 break
482 break
483 else:
483 else:
484 raise error.Abort(
484 raise error.Abort(
485 _(
485 _(
486 b'cannot push: required sidedata category not supported'
486 b'cannot push: required sidedata category not supported'
487 b" by this client: '%s'"
487 b" by this client: '%s'"
488 )
488 )
489 % pycompat.bytestr(category)
489 % pycompat.bytestr(category)
490 )
490 )
491 # get lock as we might write phase data
491 # get lock as we might write phase data
492 wlock = lock = None
492 wlock = lock = None
493 try:
493 try:
494 try:
494 try:
495 # bundle2 push may receive a reply bundle touching bookmarks
495 # bundle2 push may receive a reply bundle touching bookmarks
496 # requiring the wlock. Take it now to ensure proper ordering.
496 # requiring the wlock. Take it now to ensure proper ordering.
497 maypushback = pushop.ui.configbool(
497 maypushback = pushop.ui.configbool(
498 b'experimental',
498 b'experimental',
499 b'bundle2.pushback',
499 b'bundle2.pushback',
500 )
500 )
501 if (
501 if (
502 (not _forcebundle1(pushop))
502 (not _forcebundle1(pushop))
503 and maypushback
503 and maypushback
504 and not bookmod.bookmarksinstore(repo)
504 and not bookmod.bookmarksinstore(repo)
505 ):
505 ):
506 wlock = pushop.repo.wlock()
506 wlock = pushop.repo.wlock()
507 lock = pushop.repo.lock()
507 lock = pushop.repo.lock()
508 pushop.trmanager = transactionmanager(
508 pushop.trmanager = transactionmanager(
509 pushop.repo, b'push-response', pushop.remote.url()
509 pushop.repo, b'push-response', pushop.remote.url()
510 )
510 )
511 except error.LockUnavailable as err:
511 except error.LockUnavailable as err:
512 # source repo cannot be locked.
512 # source repo cannot be locked.
513 # We do not abort the push, but just disable the local phase
513 # We do not abort the push, but just disable the local phase
514 # synchronisation.
514 # synchronisation.
515 msg = b'cannot lock source repository: %s\n'
515 msg = b'cannot lock source repository: %s\n'
516 msg %= stringutil.forcebytestr(err)
516 msg %= stringutil.forcebytestr(err)
517 pushop.ui.debug(msg)
517 pushop.ui.debug(msg)
518
518
519 pushop.repo.checkpush(pushop)
519 pushop.repo.checkpush(pushop)
520 _checkpublish(pushop)
520 _checkpublish(pushop)
521 _pushdiscovery(pushop)
521 _pushdiscovery(pushop)
522 if not pushop.force:
522 if not pushop.force:
523 _checksubrepostate(pushop)
523 _checksubrepostate(pushop)
524 if not _forcebundle1(pushop):
524 if not _forcebundle1(pushop):
525 _pushbundle2(pushop)
525 _pushbundle2(pushop)
526 _pushchangeset(pushop)
526 _pushchangeset(pushop)
527 _pushsyncphase(pushop)
527 _pushsyncphase(pushop)
528 _pushobsolete(pushop)
528 _pushobsolete(pushop)
529 _pushbookmark(pushop)
529 _pushbookmark(pushop)
530 if pushop.trmanager is not None:
530 if pushop.trmanager is not None:
531 pushop.trmanager.close()
531 pushop.trmanager.close()
532 finally:
532 finally:
533 lockmod.release(pushop.trmanager, lock, wlock)
533 lockmod.release(pushop.trmanager, lock, wlock)
534
534
535 if repo.ui.configbool(b'experimental', b'remotenames'):
535 if repo.ui.configbool(b'experimental', b'remotenames'):
536 logexchange.pullremotenames(repo, remote)
536 logexchange.pullremotenames(repo, remote)
537
537
538 return pushop
538 return pushop
539
539
540
540
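A sketch of driving a push programmatically through this entry point; the repository path and URL are placeholders, and real callers normally go through the push command rather than calling exchange.push directly.

    from mercurial import exchange, hg, ui as uimod

    ui = uimod.ui.load()
    repo = hg.repository(ui, b'/path/to/local/repo')       # placeholder path
    remote = hg.peer(ui, {}, b'https://example.com/repo')  # placeholder URL
    pushop = exchange.push(repo, remote, force=False, newbranch=False)
    if pushop.cgresult is None:
        ui.status(b'nothing to push\n')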
541 # list of steps to perform discovery before push
541 # list of steps to perform discovery before push
542 pushdiscoveryorder = []
542 pushdiscoveryorder = []
543
543
544 # Mapping between step name and function
544 # Mapping between step name and function
545 #
545 #
546 # This exists to help extensions wrap steps if necessary
546 # This exists to help extensions wrap steps if necessary
547 pushdiscoverymapping = {}
547 pushdiscoverymapping = {}
548
548
549
549
550 def pushdiscovery(stepname):
550 def pushdiscovery(stepname):
551 """decorator for function performing discovery before push
551 """decorator for function performing discovery before push
552
552
553 The function is added to the step -> function mapping and appended to the
553 The function is added to the step -> function mapping and appended to the
554 list of steps. Beware that decorated functions will be added in order (this
554 list of steps. Beware that decorated functions will be added in order (this
555 may matter).
555 may matter).
556
556
557 You can only use this decorator for a new step; if you want to wrap a step
557 You can only use this decorator for a new step; if you want to wrap a step
558 from an extension, change the pushdiscoverymapping dictionary directly."""
558 from an extension, change the pushdiscoverymapping dictionary directly."""
559
559
560 def dec(func):
560 def dec(func):
561 assert stepname not in pushdiscoverymapping
561 assert stepname not in pushdiscoverymapping
562 pushdiscoverymapping[stepname] = func
562 pushdiscoverymapping[stepname] = func
563 pushdiscoveryorder.append(stepname)
563 pushdiscoveryorder.append(stepname)
564 return func
564 return func
565
565
566 return dec
566 return dec
567
567
568
568
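A sketch of how an extension might use this mechanism, either registering a brand-new step with the decorator or wrapping an existing one through pushdiscoverymapping; the step name and attribute are hypothetical.

    from mercurial import exchange

    @exchange.pushdiscovery(b'my-ext-data')
    def _pushdiscoverymyext(pushop):
        # runs with the other discovery steps, before any bundle2 part is built
        pushop.my_ext_payload = b'computed during discovery'

    # wrapping an existing step instead of adding a new one:
    _origphase = exchange.pushdiscoverymapping[b'phase']

    def _wrappedphase(pushop):
        pushop.ui.debug(b'running wrapped phase discovery\n')
        return _origphase(pushop)

    exchange.pushdiscoverymapping[b'phase'] = _wrappedphase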
569 def _pushdiscovery(pushop):
569 def _pushdiscovery(pushop):
570 """Run all discovery steps"""
570 """Run all discovery steps"""
571 for stepname in pushdiscoveryorder:
571 for stepname in pushdiscoveryorder:
572 step = pushdiscoverymapping[stepname]
572 step = pushdiscoverymapping[stepname]
573 step(pushop)
573 step(pushop)
574
574
575
575
576 def _checksubrepostate(pushop):
576 def _checksubrepostate(pushop):
577 """Ensure all outgoing referenced subrepo revisions are present locally"""
577 """Ensure all outgoing referenced subrepo revisions are present locally"""
578
578
579 repo = pushop.repo
579 repo = pushop.repo
580
580
581 # If the repository does not use subrepos, skip the expensive
581 # If the repository does not use subrepos, skip the expensive
582 # manifest checks.
582 # manifest checks.
583 if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
583 if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
584 return
584 return
585
585
586 for n in pushop.outgoing.missing:
586 for n in pushop.outgoing.missing:
587 ctx = repo[n]
587 ctx = repo[n]
588
588
589 if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
589 if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
590 for subpath in sorted(ctx.substate):
590 for subpath in sorted(ctx.substate):
591 sub = ctx.sub(subpath)
591 sub = ctx.sub(subpath)
592 sub.verify(onpush=True)
592 sub.verify(onpush=True)
593
593
594
594
595 @pushdiscovery(b'changeset')
595 @pushdiscovery(b'changeset')
596 def _pushdiscoverychangeset(pushop):
596 def _pushdiscoverychangeset(pushop):
597 """discover the changeset that need to be pushed"""
597 """discover the changeset that need to be pushed"""
598 fci = discovery.findcommonincoming
598 fci = discovery.findcommonincoming
599 if pushop.revs:
599 if pushop.revs:
600 commoninc = fci(
600 commoninc = fci(
601 pushop.repo,
601 pushop.repo,
602 pushop.remote,
602 pushop.remote,
603 force=pushop.force,
603 force=pushop.force,
604 ancestorsof=pushop.revs,
604 ancestorsof=pushop.revs,
605 )
605 )
606 else:
606 else:
607 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
607 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
608 common, inc, remoteheads = commoninc
608 common, inc, remoteheads = commoninc
609 fco = discovery.findcommonoutgoing
609 fco = discovery.findcommonoutgoing
610 outgoing = fco(
610 outgoing = fco(
611 pushop.repo,
611 pushop.repo,
612 pushop.remote,
612 pushop.remote,
613 onlyheads=pushop.revs,
613 onlyheads=pushop.revs,
614 commoninc=commoninc,
614 commoninc=commoninc,
615 force=pushop.force,
615 force=pushop.force,
616 )
616 )
617 pushop.outgoing = outgoing
617 pushop.outgoing = outgoing
618 pushop.remoteheads = remoteheads
618 pushop.remoteheads = remoteheads
619 pushop.incoming = inc
619 pushop.incoming = inc
620
620
621
621
622 @pushdiscovery(b'phase')
622 @pushdiscovery(b'phase')
623 def _pushdiscoveryphase(pushop):
623 def _pushdiscoveryphase(pushop):
624 """discover the phase changes that need to be pushed
624 """discover the phase changes that need to be pushed
625
625
626 (computed for both the success and failure cases of the changeset push)"""
626 (computed for both the success and failure cases of the changeset push)"""
627 outgoing = pushop.outgoing
627 outgoing = pushop.outgoing
628 repo = pushop.repo
628 repo = pushop.repo
629 unfi = repo.unfiltered()
629 unfi = repo.unfiltered()
630 cl = unfi.changelog
630 cl = unfi.changelog
631 to_rev = cl.index.rev
631 to_rev = cl.index.rev
632 remotephases = listkeys(pushop.remote, b'phases')
632 remotephases = listkeys(pushop.remote, b'phases')
633
633
634 if (
634 if (
635 pushop.ui.configbool(b'ui', b'_usedassubrepo')
635 pushop.ui.configbool(b'ui', b'_usedassubrepo')
636 and remotephases # server supports phases
636 and remotephases # server supports phases
637 and not pushop.outgoing.missing # no changesets to be pushed
637 and not pushop.outgoing.missing # no changesets to be pushed
638 and remotephases.get(b'publishing', False)
638 and remotephases.get(b'publishing', False)
639 ):
639 ):
640 # When:
640 # When:
641 # - this is a subrepo push
641 # - this is a subrepo push
642 # - and the remote supports phases
642 # - and the remote supports phases
643 # - and no changesets are to be pushed
643 # - and no changesets are to be pushed
644 # - and remote is publishing
644 # - and remote is publishing
645 # We may be in issue 3781 case!
645 # We may be in issue 3781 case!
646 # We drop the courtesy phase synchronisation that would
646 # We drop the courtesy phase synchronisation that would
647 # otherwise publish changesets that are possibly still draft
647 # otherwise publish changesets that are possibly still draft
648 # locally.
648 # locally.
649 pushop.outdatedphases = []
649 pushop.outdatedphases = []
650 pushop.fallbackoutdatedphases = []
650 pushop.fallbackoutdatedphases = []
651 return
651 return
652
652
653 fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads}
653 fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads}
654 pushop.remotephases = phases.RemotePhasesSummary(
654 pushop.remotephases = phases.RemotePhasesSummary(
655 pushop.repo,
655 pushop.repo,
656 fallbackheads_rev,
656 fallbackheads_rev,
657 remotephases,
657 remotephases,
658 )
658 )
659 droots = set(pushop.remotephases.draft_roots)
659 droots = set(pushop.remotephases.draft_roots)
660
660
661 fallback_publishing = pushop.remotephases.publishing
661 fallback_publishing = pushop.remotephases.publishing
662 push_publishing = pushop.remotephases.publishing or pushop.publish
662 push_publishing = pushop.remotephases.publishing or pushop.publish
663 missing_revs = {to_rev(n) for n in outgoing.missing}
663 missing_revs = {to_rev(n) for n in outgoing.missing}
664 drafts = unfi._phasecache.get_raw_set(unfi, phases.draft)
664 drafts = unfi._phasecache.get_raw_set(unfi, phases.draft)
665
665
666 if fallback_publishing:
666 if fallback_publishing:
667 fallback_roots = droots - missing_revs
667 fallback_roots = droots - missing_revs
668 revset = b'heads(%ld::%ld)'
668 revset = b'heads(%ld::%ld)'
669 else:
669 else:
670 fallback_roots = droots - drafts
670 fallback_roots = droots - drafts
671 fallback_roots -= missing_revs
671 fallback_roots -= missing_revs
672 # Get the list of all revs draft on remote but public here.
672 # Get the list of all revs draft on remote but public here.
673 revset = b'heads((%ld::%ld) and public())'
673 revset = b'heads((%ld::%ld) and public())'
674 if not fallback_roots:
674 if not fallback_roots:
675 fallback = fallback_rev = []
675 fallback = fallback_rev = []
676 else:
676 else:
677 fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev)
677 fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev)
678 fallback = [repo[r] for r in fallback_rev]
678 fallback = [repo[r] for r in fallback_rev]
679
679
680 if push_publishing:
680 if push_publishing:
681 published = missing_revs.copy()
681 published = missing_revs.copy()
682 else:
682 else:
683 published = missing_revs - drafts
683 published = missing_revs - drafts
684 if pushop.publish:
684 if pushop.publish:
685 published.update(fallbackheads_rev & drafts)
685 published.update(fallbackheads_rev & drafts)
686 elif fallback:
686 elif fallback:
687 published.update(fallback_rev)
687 published.update(fallback_rev)
688
688
689 pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)]
689 pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)]
690 pushop.fallbackoutdatedphases = fallback
690 pushop.fallbackoutdatedphases = fallback
691
691
692
692
693 @pushdiscovery(b'obsmarker')
693 @pushdiscovery(b'obsmarker')
694 def _pushdiscoveryobsmarkers(pushop):
694 def _pushdiscoveryobsmarkers(pushop):
695 if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
695 if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
696 return
696 return
697
697
698 if not pushop.repo.obsstore:
698 if not pushop.repo.obsstore:
699 return
699 return
700
700
701 if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
701 if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
702 return
702 return
703
703
704 repo = pushop.repo
704 repo = pushop.repo
705 # very naive computation that can be quite expensive on big repos.
705 # very naive computation that can be quite expensive on big repos.
706 # However, evolution is currently slow on them anyway.
706 # However, evolution is currently slow on them anyway.
707 nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
707 revs = repo.revs(b'::%ln', pushop.futureheads)
708 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
708 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(revs=revs)
709
709
710
710
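The replacement above is the push-discovery side of this commit: instead of materializing a changectx for every ancestor only to extract its node, the new code hands the obsstore a plain set of revisions. Schematically (an illustrative fragment, not standalone code):

    # new shape: work on revision numbers directly
    revs = repo.revs(b'::%ln', heads)
    markers = repo.obsstore.relevantmarkers(revs=revs)

    # previous shape: build full contexts, then keep only their nodes
    nodes = (c.node() for c in repo.set(b'::%ln', heads))
    markers = repo.obsstore.relevantmarkers(nodes)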
711 @pushdiscovery(b'bookmarks')
711 @pushdiscovery(b'bookmarks')
712 def _pushdiscoverybookmarks(pushop):
712 def _pushdiscoverybookmarks(pushop):
713 ui = pushop.ui
713 ui = pushop.ui
714 repo = pushop.repo.unfiltered()
714 repo = pushop.repo.unfiltered()
715 remote = pushop.remote
715 remote = pushop.remote
716 ui.debug(b"checking for updated bookmarks\n")
716 ui.debug(b"checking for updated bookmarks\n")
717 ancestors = ()
717 ancestors = ()
718 if pushop.revs:
718 if pushop.revs:
719 revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
719 revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
720 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
720 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
721
721
722 remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))
722 remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))
723
723
724 explicit = {
724 explicit = {
725 repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
725 repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
726 }
726 }
727
727
728 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
728 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
729 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
729 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
730
730
731
731
732 def _processcompared(pushop, pushed, explicit, remotebms, comp):
732 def _processcompared(pushop, pushed, explicit, remotebms, comp):
733 """decide which bookmarks to push to the remote repo
733 """decide which bookmarks to push to the remote repo
734
734
735 Exists to help extensions alter this behavior.
735 Exists to help extensions alter this behavior.
736 """
736 """
737 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
737 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
738
738
739 repo = pushop.repo
739 repo = pushop.repo
740
740
741 for b, scid, dcid in advsrc:
741 for b, scid, dcid in advsrc:
742 if b in explicit:
742 if b in explicit:
743 explicit.remove(b)
743 explicit.remove(b)
744 if not pushed or repo[scid].rev() in pushed:
744 if not pushed or repo[scid].rev() in pushed:
745 pushop.outbookmarks.append((b, dcid, scid))
745 pushop.outbookmarks.append((b, dcid, scid))
746 # search added bookmark
746 # search added bookmark
747 for b, scid, dcid in addsrc:
747 for b, scid, dcid in addsrc:
748 if b in explicit:
748 if b in explicit:
749 explicit.remove(b)
749 explicit.remove(b)
750 if bookmod.isdivergent(b):
750 if bookmod.isdivergent(b):
751 pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
751 pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
752 pushop.bkresult = 2
752 pushop.bkresult = 2
753 elif pushed and repo[scid].rev() not in pushed:
753 elif pushed and repo[scid].rev() not in pushed:
754 # in case of race or secret
754 # in case of race or secret
755 msg = _(b'cannot push bookmark X without its revision: %s!\n')
755 msg = _(b'cannot push bookmark X without its revision: %s!\n')
756 pushop.ui.warn(msg % b)
756 pushop.ui.warn(msg % b)
757 pushop.bkresult = 2
757 pushop.bkresult = 2
758 else:
758 else:
759 pushop.outbookmarks.append((b, b'', scid))
759 pushop.outbookmarks.append((b, b'', scid))
760 # search for overwritten bookmark
760 # search for overwritten bookmark
761 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
761 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
762 if b in explicit:
762 if b in explicit:
763 explicit.remove(b)
763 explicit.remove(b)
764 if not pushed or repo[scid].rev() in pushed:
764 if not pushed or repo[scid].rev() in pushed:
765 pushop.outbookmarks.append((b, dcid, scid))
765 pushop.outbookmarks.append((b, dcid, scid))
766 # search for bookmark to delete
766 # search for bookmark to delete
767 for b, scid, dcid in adddst:
767 for b, scid, dcid in adddst:
768 if b in explicit:
768 if b in explicit:
769 explicit.remove(b)
769 explicit.remove(b)
770 # treat as "deleted locally"
770 # treat as "deleted locally"
771 pushop.outbookmarks.append((b, dcid, b''))
771 pushop.outbookmarks.append((b, dcid, b''))
772 # identical bookmarks shouldn't get reported
772 # identical bookmarks shouldn't get reported
773 for b, scid, dcid in same:
773 for b, scid, dcid in same:
774 if b in explicit:
774 if b in explicit:
775 explicit.remove(b)
775 explicit.remove(b)
776
776
777 if explicit:
777 if explicit:
778 explicit = sorted(explicit)
778 explicit = sorted(explicit)
779 # we should probably list all of them
779 # we should probably list all of them
780 pushop.ui.warn(
780 pushop.ui.warn(
781 _(
781 _(
782 b'bookmark %s does not exist on the local '
782 b'bookmark %s does not exist on the local '
783 b'or remote repository!\n'
783 b'or remote repository!\n'
784 )
784 )
785 % explicit[0]
785 % explicit[0]
786 )
786 )
787 pushop.bkresult = 2
787 pushop.bkresult = 2
788
788
789 pushop.outbookmarks.sort()
789 pushop.outbookmarks.sort()
790
790
791
791
792 def _pushcheckoutgoing(pushop):
792 def _pushcheckoutgoing(pushop):
793 outgoing = pushop.outgoing
793 outgoing = pushop.outgoing
794 unfi = pushop.repo.unfiltered()
794 unfi = pushop.repo.unfiltered()
795 if not outgoing.missing:
795 if not outgoing.missing:
796 # nothing to push
796 # nothing to push
797 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
797 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
798 return False
798 return False
799 # something to push
799 # something to push
800 if not pushop.force:
800 if not pushop.force:
801 # if repo.obsstore == False --> no obsolete
801 # if repo.obsstore == False --> no obsolete
802 # then, save the iteration
802 # then, save the iteration
803 if unfi.obsstore:
803 if unfi.obsstore:
804 # these messages are here for 80 char limit reasons
804 # these messages are here for 80 char limit reasons
805 mso = _(b"push includes obsolete changeset: %s!")
805 mso = _(b"push includes obsolete changeset: %s!")
806 mspd = _(b"push includes phase-divergent changeset: %s!")
806 mspd = _(b"push includes phase-divergent changeset: %s!")
807 mscd = _(b"push includes content-divergent changeset: %s!")
807 mscd = _(b"push includes content-divergent changeset: %s!")
808 mst = {
808 mst = {
809 b"orphan": _(b"push includes orphan changeset: %s!"),
809 b"orphan": _(b"push includes orphan changeset: %s!"),
810 b"phase-divergent": mspd,
810 b"phase-divergent": mspd,
811 b"content-divergent": mscd,
811 b"content-divergent": mscd,
812 }
812 }
813 # If there is at least one obsolete or unstable
813 # If there is at least one obsolete or unstable
814 # changeset in missing, then at least one of the
814 # changeset in missing, then at least one of the
815 # missing heads will be obsolete or unstable as
815 # missing heads will be obsolete or unstable as
816 # well. So checking only the heads is enough.
816 # well. So checking only the heads is enough.
817 for node in outgoing.ancestorsof:
817 for node in outgoing.ancestorsof:
818 ctx = unfi[node]
818 ctx = unfi[node]
819 if ctx.obsolete():
819 if ctx.obsolete():
820 raise error.Abort(mso % ctx)
820 raise error.Abort(mso % ctx)
821 elif ctx.isunstable():
821 elif ctx.isunstable():
822 # TODO print more than one instability in the abort
822 # TODO print more than one instability in the abort
823 # message
823 # message
824 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
824 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
825
825
826 discovery.checkheads(pushop)
826 discovery.checkheads(pushop)
827 return True
827 return True
828
828
829
829
830 # List of names of steps to perform for an outgoing bundle2, order matters.
830 # List of names of steps to perform for an outgoing bundle2, order matters.
831 b2partsgenorder = []
831 b2partsgenorder = []
832
832
833 # Mapping between step name and function
833 # Mapping between step name and function
834 #
834 #
835 # This exists to help extensions wrap steps if necessary
835 # This exists to help extensions wrap steps if necessary
836 b2partsgenmapping = {}
836 b2partsgenmapping = {}
837
837
838
838
839 def b2partsgenerator(stepname, idx=None):
839 def b2partsgenerator(stepname, idx=None):
840 """decorator for function generating bundle2 part
840 """decorator for function generating bundle2 part
841
841
842 The function is added to the step -> function mapping and appended to the
842 The function is added to the step -> function mapping and appended to the
843 list of steps. Beware that decorated functions will be added in order
843 list of steps. Beware that decorated functions will be added in order
844 (this may matter).
844 (this may matter).
845
845
846 You can only use this decorator for new steps; if you want to wrap a step
846 You can only use this decorator for new steps; if you want to wrap a step
847 from an extension, change the b2partsgenmapping dictionary directly."""
847 from an extension, change the b2partsgenmapping dictionary directly."""
848
848
849 def dec(func):
849 def dec(func):
850 assert stepname not in b2partsgenmapping
850 assert stepname not in b2partsgenmapping
851 b2partsgenmapping[stepname] = func
851 b2partsgenmapping[stepname] = func
852 if idx is None:
852 if idx is None:
853 b2partsgenorder.append(stepname)
853 b2partsgenorder.append(stepname)
854 else:
854 else:
855 b2partsgenorder.insert(idx, stepname)
855 b2partsgenorder.insert(idx, stepname)
856 return func
856 return func
857
857
858 return dec
858 return dec
859
859
860
860
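# The decorator above drives _pushbundle2() further down: every registered
# step is called in b2partsgenorder order and may append parts to the
# bundler. As an illustrative sketch only (the step name and part type below
# are hypothetical, not part of this module), an extension could register an
# extra outgoing part like this:
#
#     @b2partsgenerator(b'my-extension-data')
#     def _pushmyextensiondata(pushop, bundler):
#         if b'my-extension-data' in pushop.stepsdone:
#             return
#         pushop.stepsdone.add(b'my-extension-data')
#         bundler.newpart(b'x-my-extension-data', data=b'payload')
#
# Returning a callable from such a step registers it as a reply handler,
# mirroring what _pushb2ctx() does below.

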
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push race,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.ancestorsof:
        allowunrelated = b'related' in bundler.capabilities.get(
            b'checkheads', ()
        )
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.items():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart(b'check:updated-heads', data=data)


def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(
        pushop.outgoing.missing
        or pushop.outdatedphases
        or pushop.outobsmarkers
        or pushop.outbookmarks
    )


@b2partsgenerator(b'check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = b'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        data.append((book, old))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'check:bookmarks', data=checkdata)


@b2partsgenerator(b'check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = {p: [] for p in phases.allphases}
        to_node = pushop.repo.unfiltered().changelog.node
        checks[phases.public].extend(
            to_node(r) for r in pushop.remotephases.public_heads
        )
        checks[phases.draft].extend(
            to_node(r) for r in pushop.remotephases.draft_roots
        )
        if any(checks.values()):
            for phase in checks:
                checks[phase].sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart(b'check:phases', data=checkdata)


@b2partsgenerator(b'changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(pushop.repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pushop.remote)
    cgstream = changegroup.makestream(
        pushop.repo,
        pushop.outgoing,
        version,
        b'push',
        bundlecaps=b2caps,
        remote_sidedata=remote_sidedata,
    )
    cgpart = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam(b'version', version)
    if scmutil.istreemanifest(pushop.repo):
        cgpart.addparam(b'treemanifest', b'1')
    if repository.REPO_FEATURE_SIDE_DATA in pushop.repo.features:
        cgpart.addparam(b'exp-sidedata', b'1')

    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies[b'changegroup']) == 1
        pushop.cgresult = cgreplies[b'changegroup'][0][b'return']

    return handlereply


@b2partsgenerator(b'phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if b'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
    haspushkey = b'pushkey' in b2caps
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)


def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add(b'phases')
    if pushop.outdatedphases:
        updates = {p: [] for p in phases.allphases}
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart(b'phase-heads', data=phasedata)


def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add(b'phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_(b'updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'phases'))
        part.addparam(b'key', enc(newremotehead.hex()))
        part.addparam(b'old', enc(b'%d' % phases.draft))
        part.addparam(b'new', enc(b'%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _(b'server ignored update of %s to public!\n') % node
            elif not int(results[0][b'return']):
                msg = _(b'updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)

    return handlereply


@b2partsgenerator(b'obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if b'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)


@b2partsgenerator(b'bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if b'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
    legacybooks = b'bookmarks' in legacy

    if not legacybooks and b'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif b'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)


def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return b'export'
    elif not new:
        return b'delete'
    return b'update'


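# For reference, _bmaction() maps the old and new bookmark values seen during
# discovery to the message key used in bookmsgmap; illustrative values:
#
#     _bmaction(b'', newnode)      -> b'export'  (bookmark not on the remote yet)
#     _bmaction(oldnode, b'')      -> b'delete'  (bookmark removed on the remote)
#     _bmaction(oldnode, newnode)  -> b'update'  (bookmark moved to a new node)

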
def _abortonsecretctx(pushop, node, b):
    """abort if a given bookmark points to a secret changeset"""
    if node and pushop.repo[node].phase() == phases.secret:
        raise error.Abort(
            _(b'cannot push bookmark %s as it points to a secret changeset') % b
        )


def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply


def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'bookmarks'))
        part.addparam(b'key', enc(book))
        part.addparam(b'old', enc(hex(old)))
        part.addparam(b'new', enc(hex(new)))
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0][b'return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1

    return handlereply


@b2partsgenerator(b'pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if b'=' not in raw:
                msg = (
                    b"unable to parse variable '%s', should follow "
                    b"'KEY=VALUE' or 'KEY=' format"
                )
                raise error.Abort(msg % raw)
            k, v = raw.split(b'=', 1)
            shellvars[k] = v

        part = bundler.newpart(b'pushvars')

        for key, value in shellvars.items():
            part.addparam(key, value, mandatory=False)


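# Illustrative example (the variable names and values are made up): with
# `hg push --pushvars DEBUG=1 --pushvars REASON=hotfix` the generator above
# emits a single advisory 'pushvars' part carrying the parameters
# DEBUG=b'1' and REASON=b'hotfix'; server-side hooks can then read them,
# typically as HG_USERVAR_DEBUG and HG_USERVAR_REASON in their environment.

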
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = pushop.trmanager and pushop.ui.configbool(
        b'experimental', b'bundle2.pushback'
    )

    # create reply capability
    capsblob = bundle2.encodecaps(
        bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
    )
    bundler.newpart(b'replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            with pushop.remote.commandexecutor() as e:
                reply = e.callcommand(
                    b'unbundle',
                    {
                        b'bundle': stream,
                        b'heads': [b'force'],
                        b'url': pushop.remote.url(),
                    },
                ).result()
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(
                pushop.repo,
                reply,
                trgetter,
                remote=pushop.remote,
            )
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.error(_(b'remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.error(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
            raise error.RemoteError(_(b'push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)


def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable(b'unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (
        outgoing.excluded or pushop.repo.changelog.filteredrevs
    ):
        # push everything,
        # use the fast path, no race possible on push
        fastpath = True
    else:
        fastpath = False

    cg = changegroup.makechangegroup(
        pushop.repo,
        outgoing,
        b'01',
        b'push',
        fastpath=fastpath,
        bundlecaps=bundlecaps,
    )

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = [b'force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())


def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = listkeys(pushop.remote, b'phases')
    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and pushop.cgresult is None  # nothing was pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {b'publishing': b'True'}
    if not remotephases:  # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        unfi = pushop.repo.unfiltered()
        to_rev = unfi.changelog.index.rev
        to_node = unfi.changelog.node
        cheads_revs = [to_rev(n) for n in cheads]
        pheads_revs, _dr = phases.analyze_remote_phases(
            pushop.repo,
            cheads_revs,
            remotephases,
        )
        pheads = [to_node(r) for r in pheads_revs]
        ### Apply remote phase on local
        if remotephases.get(b'publishing', False):
            _localphasemove(pushop, cheads)
        else:  # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if b'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add(b'phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            with pushop.remote.commandexecutor() as e:
                r = e.callcommand(
                    b'pushkey',
                    {
                        b'namespace': b'phases',
                        b'key': newremotehead.hex(),
                        b'old': b'%d' % phases.draft,
                        b'new': b'%d' % phases.public,
                    },
                ).result()

            if not r:
                pushop.ui.warn(
                    _(b'updating %s to public failed!\n') % newremotehead
                )


def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(
            pushop.repo, pushop.trmanager.transaction(), phase, nodes
        )
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(
                _(
                    b'cannot lock source repo, skipping '
                    b'local %s phase update\n'
                )
                % phasestr
            )


def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if b'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug(b'try to push obsolete markers to remote\n')
        rslts = []
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        remotedata = obsolete._pushkeyescape(markers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey(b'obsolete', key, b'', data))
        if [r for r in rslts if not r]:
            msg = _(b'failed to push some obsolete markers!\n')
            repo.ui.warn(msg)


def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'

        with remote.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': b'bookmarks',
                    b'key': b,
                    b'old': hex(old),
                    b'new': hex(new),
                },
            ).result()

        if r:
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1


class pulloperation:
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        heads=None,
        force=False,
        bookmarks=(),
        remotebookmarks=None,
        streamclonerequested=None,
        includepats=None,
        excludepats=None,
        depth=None,
        path=None,
    ):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # path object used to build this remote
        #
        # Ideally, the remote peer would carry that directly.
        self.remote_path = path
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [
            repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
        ]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False
        # Set of file patterns to include.
        self.includepats = includepats
        # Set of file patterns to exclude.
        self.excludepats = excludepats
        # Number of ancestor changesets to pull from each pulled head.
        self.depth = depth

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()


class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""

    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = b'%s\n%s' % (self.source, urlutil.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs[b'source'] = self.source
            self._tr.hookargs[b'url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()


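# A minimal sketch of how the pull path below uses transactionmanager (the
# repo/remote names are placeholders): the manager is stored on the pull
# operation, the transaction is only opened lazily by code that needs it, and
# the context-manager protocol inherited from util.transactional closes it on
# success and releases it on error.
#
#     pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
#     with pullop.trmanager:
#         tr = pullop.trmanager.transaction()  # created on first use
#         ...  # apply incoming data under this transaction
#     # normal exit -> close() commits; an exception -> release() rolls back

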
def listkeys(remote, namespace):
    with remote.commandexecutor() as e:
        return e.callcommand(b'listkeys', {b'namespace': namespace}).result()


def _fullpullbundle2(repo, pullop):
    # The server may send a partial reply, i.e. when inlining
    # pre-computed bundles. In that case, update the common
    # set based on the results and pull another bundle.
    #
    # There are two indicators that the process is finished:
    # - no changeset has been added, or
    # - all remote heads are known locally.
    # The head check must use the unfiltered view as obsoletion
    # markers can hide heads.
    unfi = repo.unfiltered()
    unficl = unfi.changelog

    def headsofdiff(h1, h2):
        """Returns heads(h1 % h2)"""
        res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
        return {ctx.node() for ctx in res}

    def headsofunion(h1, h2):
        """Returns heads((h1 + h2) - null)"""
        res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
        return {ctx.node() for ctx in res}

    while True:
        old_heads = unficl.heads()
        clstart = len(unficl)
        _pullbundle2(pullop)
        if requirements.NARROW_REQUIREMENT in repo.requirements:
            # XXX narrow clones filter the heads on the server side during
            # XXX getbundle and result in partial replies as well.
            # XXX Disable pull bundles in this case as a band aid to avoid
            # XXX extra round trips.
            break
        if clstart == len(unficl):
            break
        if all(unficl.hasnode(n) for n in pullop.rheads):
            break
        new_heads = headsofdiff(unficl.heads(), old_heads)
        pullop.common = headsofunion(new_heads, pullop.common)
        pullop.rheads = set(pullop.rheads) - pullop.common


def add_confirm_callback(repo, pullop):
    """adds a finalize callback to the transaction which can be used to show
    stats to the user and confirm the pull before committing the transaction"""

    tr = pullop.trmanager.transaction()
    scmutil.registersummarycallback(
        repo, tr, txnname=b'pull', as_validator=True
    )
    reporef = weakref.ref(repo.unfiltered())

    def prompt(tr):
        repo = reporef()
        cm = _(b'accept incoming changes (yn)?$$ &Yes $$ &No')
        if repo.ui.promptchoice(cm):
            raise error.Abort(b"user aborted")

    tr.addvalidator(b'900-pull-prompt', prompt)


1654 def pull(
1654 def pull(
1655 repo,
1655 repo,
1656 remote,
1656 remote,
1657 path=None,
1657 path=None,
1658 heads=None,
1658 heads=None,
1659 force=False,
1659 force=False,
1660 bookmarks=(),
1660 bookmarks=(),
1661 opargs=None,
1661 opargs=None,
1662 streamclonerequested=None,
1662 streamclonerequested=None,
1663 includepats=None,
1663 includepats=None,
1664 excludepats=None,
1664 excludepats=None,
1665 depth=None,
1665 depth=None,
1666 confirm=None,
1666 confirm=None,
1667 ):
1667 ):
1668 """Fetch repository data from a remote.
1668 """Fetch repository data from a remote.
1669
1669
1670 This is the main function used to retrieve data from a remote repository.
1670 This is the main function used to retrieve data from a remote repository.
1671
1671
1672 ``repo`` is the local repository to clone into.
1672 ``repo`` is the local repository to clone into.
1673 ``remote`` is a peer instance.
1673 ``remote`` is a peer instance.
1674 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1674 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1675 default) means to pull everything from the remote.
1675 default) means to pull everything from the remote.
1676 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1676 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1677 default, all remote bookmarks are pulled.
1677 default, all remote bookmarks are pulled.
1678 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1678 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1679 initialization.
1679 initialization.
1680 ``streamclonerequested`` is a boolean indicating whether a "streaming
1680 ``streamclonerequested`` is a boolean indicating whether a "streaming
1681 clone" is requested. A "streaming clone" is essentially a raw file copy
1681 clone" is requested. A "streaming clone" is essentially a raw file copy
1682 of revlogs from the server. This only works when the local repository is
1682 of revlogs from the server. This only works when the local repository is
1683 empty. The default value of ``None`` means to respect the server
1683 empty. The default value of ``None`` means to respect the server
1684 configuration for preferring stream clones.
1684 configuration for preferring stream clones.
1685 ``includepats`` and ``excludepats`` define explicit file patterns to
1685 ``includepats`` and ``excludepats`` define explicit file patterns to
1686 include and exclude in storage, respectively. If not defined, narrow
1686 include and exclude in storage, respectively. If not defined, narrow
1687 patterns from the repo instance are used, if available.
1687 patterns from the repo instance are used, if available.
1688 ``depth`` is an integer indicating the DAG depth of history we're
1688 ``depth`` is an integer indicating the DAG depth of history we're
1689 interested in. If defined, for each revision specified in ``heads``, we
1689 interested in. If defined, for each revision specified in ``heads``, we
1690 will fetch up to this many of its ancestors and data associated with them.
1690 will fetch up to this many of its ancestors and data associated with them.
1691 ``confirm`` is a boolean indicating whether the pull should be confirmed
1691 ``confirm`` is a boolean indicating whether the pull should be confirmed
1692 before committing the transaction. This overrides HGPLAIN.
1692 before committing the transaction. This overrides HGPLAIN.
1693
1693
1694 Returns the ``pulloperation`` created for this pull.
1694 Returns the ``pulloperation`` created for this pull.
1695 """
1695 """
1696 if opargs is None:
1696 if opargs is None:
1697 opargs = {}
1697 opargs = {}
1698
1698
1699 # We allow the narrow patterns to be passed in explicitly to provide more
1699 # We allow the narrow patterns to be passed in explicitly to provide more
1700 # flexibility for API consumers.
1700 # flexibility for API consumers.
1701 if includepats is not None or excludepats is not None:
1701 if includepats is not None or excludepats is not None:
1702 includepats = includepats or set()
1702 includepats = includepats or set()
1703 excludepats = excludepats or set()
1703 excludepats = excludepats or set()
1704 else:
1704 else:
1705 includepats, excludepats = repo.narrowpats
1705 includepats, excludepats = repo.narrowpats
1706
1706
1707 narrowspec.validatepatterns(includepats)
1707 narrowspec.validatepatterns(includepats)
1708 narrowspec.validatepatterns(excludepats)
1708 narrowspec.validatepatterns(excludepats)
1709
1709
1710 pullop = pulloperation(
1710 pullop = pulloperation(
1711 repo,
1711 repo,
1712 remote,
1712 remote,
1713 path=path,
1713 path=path,
1714 heads=heads,
1714 heads=heads,
1715 force=force,
1715 force=force,
1716 bookmarks=bookmarks,
1716 bookmarks=bookmarks,
1717 streamclonerequested=streamclonerequested,
1717 streamclonerequested=streamclonerequested,
1718 includepats=includepats,
1718 includepats=includepats,
1719 excludepats=excludepats,
1719 excludepats=excludepats,
1720 depth=depth,
1720 depth=depth,
1721 **pycompat.strkwargs(opargs),
1721 **pycompat.strkwargs(opargs),
1722 )
1722 )
1723
1723
1724 peerlocal = pullop.remote.local()
1724 peerlocal = pullop.remote.local()
1725 if peerlocal:
1725 if peerlocal:
1726 missing = set(peerlocal.requirements) - pullop.repo.supported
1726 missing = set(peerlocal.requirements) - pullop.repo.supported
1727 if missing:
1727 if missing:
1728 msg = _(
1728 msg = _(
1729 b"required features are not"
1729 b"required features are not"
1730 b" supported in the destination:"
1730 b" supported in the destination:"
1731 b" %s"
1731 b" %s"
1732 ) % (b', '.join(sorted(missing)))
1732 ) % (b', '.join(sorted(missing)))
1733 raise error.Abort(msg)
1733 raise error.Abort(msg)
1734
1734
1735 for category in repo._wanted_sidedata:
1735 for category in repo._wanted_sidedata:
1736 # Check that a computer is registered for that category for at least
1736 # Check that a computer is registered for that category for at least
1737 # one revlog kind.
1737 # one revlog kind.
1738 for kind, computers in repo._sidedata_computers.items():
1738 for kind, computers in repo._sidedata_computers.items():
1739 if computers.get(category):
1739 if computers.get(category):
1740 break
1740 break
1741 else:
1741 else:
1742 # This should never happen since repos are supposed to be able to
1742 # This should never happen since repos are supposed to be able to
1743 # generate the sidedata they require.
1743 # generate the sidedata they require.
1744 raise error.ProgrammingError(
1744 raise error.ProgrammingError(
1745 _(
1745 _(
1746 b'sidedata category requested by local side without local'
1746 b'sidedata category requested by local side without local'
1747 b"support: '%s'"
1747 b"support: '%s'"
1748 )
1748 )
1749 % pycompat.bytestr(category)
1749 % pycompat.bytestr(category)
1750 )
1750 )
1751
1751
1752 pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
1752 pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
1753 wlock = util.nullcontextmanager
1753 wlock = util.nullcontextmanager
1754 if not bookmod.bookmarksinstore(repo):
1754 if not bookmod.bookmarksinstore(repo):
1755 wlock = repo.wlock
1755 wlock = repo.wlock
1756 with wlock(), repo.lock(), pullop.trmanager:
1756 with wlock(), repo.lock(), pullop.trmanager:
1757 if confirm or (
1757 if confirm or (
1758 repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
1758 repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
1759 ):
1759 ):
1760 add_confirm_callback(repo, pullop)
1760 add_confirm_callback(repo, pullop)
1761
1761
1762 # This should ideally be in _pullbundle2(). However, it needs to run
1762 # This should ideally be in _pullbundle2(). However, it needs to run
1763 # before discovery to avoid extra work.
1763 # before discovery to avoid extra work.
1764 _maybeapplyclonebundle(pullop)
1764 _maybeapplyclonebundle(pullop)
1765 streamclone.maybeperformlegacystreamclone(pullop)
1765 streamclone.maybeperformlegacystreamclone(pullop)
1766 _pulldiscovery(pullop)
1766 _pulldiscovery(pullop)
1767 if pullop.canusebundle2:
1767 if pullop.canusebundle2:
1768 _fullpullbundle2(repo, pullop)
1768 _fullpullbundle2(repo, pullop)
1769 _pullchangeset(pullop)
1769 _pullchangeset(pullop)
1770 _pullphase(pullop)
1770 _pullphase(pullop)
1771 _pullbookmarks(pullop)
1771 _pullbookmarks(pullop)
1772 _pullobsolete(pullop)
1772 _pullobsolete(pullop)
1773
1773
1774 # storing remotenames
1774 # storing remotenames
1775 if repo.ui.configbool(b'experimental', b'remotenames'):
1775 if repo.ui.configbool(b'experimental', b'remotenames'):
1776 logexchange.pullremotenames(repo, remote)
1776 logexchange.pullremotenames(repo, remote)
1777
1777
1778 return pullop
1778 return pullop
1779
1779
1780
1780
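For orientation, a rough usage sketch of this entry point, assuming the surrounding module is ``mercurial.exchange`` and this is its ``pull()`` function (which the code above suggests); ``repo`` and ``remote`` stand for an existing local repository and an already-connected peer, and the keyword values are illustrative only:

    from mercurial import exchange

    def shallow_confirmed_pull(repo, remote, heads):
        # `heads` are binary node ids; fetch at most 100 ancestors per head
        # and ask for confirmation before the transaction is committed
        return exchange.pull(repo, remote, heads=heads, depth=100, confirm=True)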
1781 # list of steps to perform discovery before pull
1781 # list of steps to perform discovery before pull
1782 pulldiscoveryorder = []
1782 pulldiscoveryorder = []
1783
1783
1784 # Mapping between step name and function
1784 # Mapping between step name and function
1785 #
1785 #
1786 # This exists to help extensions wrap steps if necessary
1786 # This exists to help extensions wrap steps if necessary
1787 pulldiscoverymapping = {}
1787 pulldiscoverymapping = {}
1788
1788
1789
1789
1790 def pulldiscovery(stepname):
1790 def pulldiscovery(stepname):
1791 """decorator for function performing discovery before pull
1791 """decorator for function performing discovery before pull
1792
1792
1793 The function is added to the step -> function mapping and appended to the
1793 The function is added to the step -> function mapping and appended to the
1794 list of steps. Beware that decorated functions will be added in order (this
1794 list of steps. Beware that decorated functions will be added in order (this
1795 may matter).
1795 may matter).
1796
1796
1797 You can only use this decorator for a new step; if you want to wrap a step
1797 You can only use this decorator for a new step; if you want to wrap a step
1798 from an extension, change the pulldiscovery dictionary directly."""
1798 from an extension, change the pulldiscovery dictionary directly."""
1799
1799
1800 def dec(func):
1800 def dec(func):
1801 assert stepname not in pulldiscoverymapping
1801 assert stepname not in pulldiscoverymapping
1802 pulldiscoverymapping[stepname] = func
1802 pulldiscoverymapping[stepname] = func
1803 pulldiscoveryorder.append(stepname)
1803 pulldiscoveryorder.append(stepname)
1804 return func
1804 return func
1805
1805
1806 return dec
1806 return dec
1807
1807
1808
1808
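As a hedged illustration of the mechanism above, an extension defining a brand-new discovery step would register it as below; the step name and body are invented for the example:

    @pulldiscovery(b'example-step')
    def _pulldiscoveryexample(pullop):
        # runs after the steps registered earlier in pulldiscoveryorder
        pullop.repo.ui.debug(b'example discovery step ran\n')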
1809 def _pulldiscovery(pullop):
1809 def _pulldiscovery(pullop):
1810 """Run all discovery steps"""
1810 """Run all discovery steps"""
1811 for stepname in pulldiscoveryorder:
1811 for stepname in pulldiscoveryorder:
1812 step = pulldiscoverymapping[stepname]
1812 step = pulldiscoverymapping[stepname]
1813 step(pullop)
1813 step(pullop)
1814
1814
1815
1815
1816 @pulldiscovery(b'b1:bookmarks')
1816 @pulldiscovery(b'b1:bookmarks')
1817 def _pullbookmarkbundle1(pullop):
1817 def _pullbookmarkbundle1(pullop):
1818 """fetch bookmark data in bundle1 case
1818 """fetch bookmark data in bundle1 case
1819
1819
1820 If not using bundle2, we have to fetch bookmarks before changeset
1820 If not using bundle2, we have to fetch bookmarks before changeset
1821 discovery to reduce the chance and impact of race conditions."""
1821 discovery to reduce the chance and impact of race conditions."""
1822 if pullop.remotebookmarks is not None:
1822 if pullop.remotebookmarks is not None:
1823 return
1823 return
1824 if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
1824 if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
1825 # all known bundle2 servers now support listkeys, but let's be nice with
1825 # all known bundle2 servers now support listkeys, but let's be nice with
1826 # new implementations.
1826 # new implementations.
1827 return
1827 return
1828 books = listkeys(pullop.remote, b'bookmarks')
1828 books = listkeys(pullop.remote, b'bookmarks')
1829 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1829 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1830
1830
1831
1831
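For illustration, the listkeys payload handled here maps bookmark names to hex node strings; the hash below is made up. A sketch of the conversion performed above:

    books = {b'@': b'3903775176ed42b1458a6281db4a0ccf4d9f287a'}
    remotebookmarks = bookmod.unhexlifybookmarks(books)
    # remotebookmarks now maps b'@' to the 20-byte binary node id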
1832 @pulldiscovery(b'changegroup')
1832 @pulldiscovery(b'changegroup')
1833 def _pulldiscoverychangegroup(pullop):
1833 def _pulldiscoverychangegroup(pullop):
1834 """discovery phase for the pull
1834 """discovery phase for the pull
1835
1835
1836 Currently handles changeset discovery only; it will handle all discovery
1836 Currently handles changeset discovery only; it will handle all discovery
1837 at some point."""
1837 at some point."""
1838 tmp = discovery.findcommonincoming(
1838 tmp = discovery.findcommonincoming(
1839 pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
1839 pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
1840 )
1840 )
1841 common, fetch, rheads = tmp
1841 common, fetch, rheads = tmp
1842 has_node = pullop.repo.unfiltered().changelog.index.has_node
1842 has_node = pullop.repo.unfiltered().changelog.index.has_node
1843 if fetch and rheads:
1843 if fetch and rheads:
1844 # If a remote head is filtered locally, put it back in common.
1844 # If a remote head is filtered locally, put it back in common.
1845 #
1845 #
1846 # This is a hackish solution to catch most of the "common but locally
1846 # This is a hackish solution to catch most of the "common but locally
1847 # hidden" situations. We do not perform discovery on the unfiltered
1847 # hidden" situations. We do not perform discovery on the unfiltered
1848 # repository because it ends up doing a pathological number of round
1848 # repository because it ends up doing a pathological number of round
1849 # trips for a huge number of changesets we do not care about.
1849 # trips for a huge number of changesets we do not care about.
1850 #
1850 #
1851 # If a set of such "common but filtered" changesets exists on the server
1851 # If a set of such "common but filtered" changesets exists on the server
1852 # but does not include a remote head, we will not be able to detect it.
1852 # but does not include a remote head, we will not be able to detect it.
1853 scommon = set(common)
1853 scommon = set(common)
1854 for n in rheads:
1854 for n in rheads:
1855 if has_node(n):
1855 if has_node(n):
1856 if n not in scommon:
1856 if n not in scommon:
1857 common.append(n)
1857 common.append(n)
1858 if set(rheads).issubset(set(common)):
1858 if set(rheads).issubset(set(common)):
1859 fetch = []
1859 fetch = []
1860 pullop.common = common
1860 pullop.common = common
1861 pullop.fetch = fetch
1861 pullop.fetch = fetch
1862 pullop.rheads = rheads
1862 pullop.rheads = rheads
1863
1863
1864
1864
1865 def _pullbundle2(pullop):
1865 def _pullbundle2(pullop):
1866 """pull data using bundle2
1866 """pull data using bundle2
1867
1867
1868 For now, the only supported data are changegroup."""
1868 For now, the only supported data are changegroup."""
1869 kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}
1869 kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}
1870
1870
1871 # make ui easier to access
1871 # make ui easier to access
1872 ui = pullop.repo.ui
1872 ui = pullop.repo.ui
1873
1873
1874 # At the moment we don't do stream clones over bundle2. If that is
1874 # At the moment we don't do stream clones over bundle2. If that is
1875 # implemented then here's where the check for that will go.
1875 # implemented then here's where the check for that will go.
1876 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1876 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1877
1877
1878 # declare pull perimeters
1878 # declare pull perimeters
1879 kwargs[b'common'] = pullop.common
1879 kwargs[b'common'] = pullop.common
1880 kwargs[b'heads'] = pullop.heads or pullop.rheads
1880 kwargs[b'heads'] = pullop.heads or pullop.rheads
1881
1881
1882 # check that the server supports narrow, then add includepats and excludepats
1882 # check that the server supports narrow, then add includepats and excludepats
1883 servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
1883 servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
1884 if servernarrow and pullop.includepats:
1884 if servernarrow and pullop.includepats:
1885 kwargs[b'includepats'] = pullop.includepats
1885 kwargs[b'includepats'] = pullop.includepats
1886 if servernarrow and pullop.excludepats:
1886 if servernarrow and pullop.excludepats:
1887 kwargs[b'excludepats'] = pullop.excludepats
1887 kwargs[b'excludepats'] = pullop.excludepats
1888
1888
1889 if streaming:
1889 if streaming:
1890 kwargs[b'cg'] = False
1890 kwargs[b'cg'] = False
1891 kwargs[b'stream'] = True
1891 kwargs[b'stream'] = True
1892 pullop.stepsdone.add(b'changegroup')
1892 pullop.stepsdone.add(b'changegroup')
1893 pullop.stepsdone.add(b'phases')
1893 pullop.stepsdone.add(b'phases')
1894
1894
1895 else:
1895 else:
1896 # pulling changegroup
1896 # pulling changegroup
1897 pullop.stepsdone.add(b'changegroup')
1897 pullop.stepsdone.add(b'changegroup')
1898
1898
1899 kwargs[b'cg'] = pullop.fetch
1899 kwargs[b'cg'] = pullop.fetch
1900
1900
1901 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1901 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1902 hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
1902 hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
1903 if not legacyphase and hasbinaryphase:
1903 if not legacyphase and hasbinaryphase:
1904 kwargs[b'phases'] = True
1904 kwargs[b'phases'] = True
1905 pullop.stepsdone.add(b'phases')
1905 pullop.stepsdone.add(b'phases')
1906
1906
1907 if b'listkeys' in pullop.remotebundle2caps:
1907 if b'listkeys' in pullop.remotebundle2caps:
1908 if b'phases' not in pullop.stepsdone:
1908 if b'phases' not in pullop.stepsdone:
1909 kwargs[b'listkeys'] = [b'phases']
1909 kwargs[b'listkeys'] = [b'phases']
1910
1910
1911 bookmarksrequested = False
1911 bookmarksrequested = False
1912 legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
1912 legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
1913 hasbinarybook = b'bookmarks' in pullop.remotebundle2caps
1913 hasbinarybook = b'bookmarks' in pullop.remotebundle2caps
1914
1914
1915 if pullop.remotebookmarks is not None:
1915 if pullop.remotebookmarks is not None:
1916 pullop.stepsdone.add(b'request-bookmarks')
1916 pullop.stepsdone.add(b'request-bookmarks')
1917
1917
1918 if (
1918 if (
1919 b'request-bookmarks' not in pullop.stepsdone
1919 b'request-bookmarks' not in pullop.stepsdone
1920 and pullop.remotebookmarks is None
1920 and pullop.remotebookmarks is None
1921 and not legacybookmark
1921 and not legacybookmark
1922 and hasbinarybook
1922 and hasbinarybook
1923 ):
1923 ):
1924 kwargs[b'bookmarks'] = True
1924 kwargs[b'bookmarks'] = True
1925 bookmarksrequested = True
1925 bookmarksrequested = True
1926
1926
1927 if b'listkeys' in pullop.remotebundle2caps:
1927 if b'listkeys' in pullop.remotebundle2caps:
1928 if b'request-bookmarks' not in pullop.stepsdone:
1928 if b'request-bookmarks' not in pullop.stepsdone:
1929 # make sure to always include bookmark data when migrating
1929 # make sure to always include bookmark data when migrating
1930 # `hg incoming --bundle` to using this function.
1930 # `hg incoming --bundle` to using this function.
1931 pullop.stepsdone.add(b'request-bookmarks')
1931 pullop.stepsdone.add(b'request-bookmarks')
1932 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1932 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1933
1933
1934 # If this is a full pull / clone and the server supports the clone bundles
1934 # If this is a full pull / clone and the server supports the clone bundles
1935 # feature, tell the server whether we attempted a clone bundle. The
1935 # feature, tell the server whether we attempted a clone bundle. The
1936 # presence of this flag indicates the client supports clone bundles. This
1936 # presence of this flag indicates the client supports clone bundles. This
1937 # will enable the server to treat clients that support clone bundles
1937 # will enable the server to treat clients that support clone bundles
1938 # differently from those that don't.
1938 # differently from those that don't.
1939 if (
1939 if (
1940 pullop.remote.capable(b'clonebundles')
1940 pullop.remote.capable(b'clonebundles')
1941 and pullop.heads is None
1941 and pullop.heads is None
1942 and list(pullop.common) == [pullop.repo.nullid]
1942 and list(pullop.common) == [pullop.repo.nullid]
1943 ):
1943 ):
1944 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1944 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1945
1945
1946 if streaming:
1946 if streaming:
1947 pullop.repo.ui.status(_(b'streaming all changes\n'))
1947 pullop.repo.ui.status(_(b'streaming all changes\n'))
1948 elif not pullop.fetch:
1948 elif not pullop.fetch:
1949 pullop.repo.ui.status(_(b"no changes found\n"))
1949 pullop.repo.ui.status(_(b"no changes found\n"))
1950 pullop.cgresult = 0
1950 pullop.cgresult = 0
1951 else:
1951 else:
1952 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
1952 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
1953 pullop.repo.ui.status(_(b"requesting all changes\n"))
1953 pullop.repo.ui.status(_(b"requesting all changes\n"))
1954 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1954 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1955 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1955 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1956 if obsolete.commonversion(remoteversions) is not None:
1956 if obsolete.commonversion(remoteversions) is not None:
1957 kwargs[b'obsmarkers'] = True
1957 kwargs[b'obsmarkers'] = True
1958 pullop.stepsdone.add(b'obsmarkers')
1958 pullop.stepsdone.add(b'obsmarkers')
1959 _pullbundle2extraprepare(pullop, kwargs)
1959 _pullbundle2extraprepare(pullop, kwargs)
1960
1960
1961 remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
1961 remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
1962 if remote_sidedata:
1962 if remote_sidedata:
1963 kwargs[b'remote_sidedata'] = remote_sidedata
1963 kwargs[b'remote_sidedata'] = remote_sidedata
1964
1964
1965 with pullop.remote.commandexecutor() as e:
1965 with pullop.remote.commandexecutor() as e:
1966 args = dict(kwargs)
1966 args = dict(kwargs)
1967 args[b'source'] = b'pull'
1967 args[b'source'] = b'pull'
1968 bundle = e.callcommand(b'getbundle', args).result()
1968 bundle = e.callcommand(b'getbundle', args).result()
1969
1969
1970 try:
1970 try:
1971 op = bundle2.bundleoperation(
1971 op = bundle2.bundleoperation(
1972 pullop.repo,
1972 pullop.repo,
1973 pullop.gettransaction,
1973 pullop.gettransaction,
1974 source=b'pull',
1974 source=b'pull',
1975 remote=pullop.remote,
1975 remote=pullop.remote,
1976 )
1976 )
1977 op.modes[b'bookmarks'] = b'records'
1977 op.modes[b'bookmarks'] = b'records'
1978 bundle2.processbundle(
1978 bundle2.processbundle(
1979 pullop.repo,
1979 pullop.repo,
1980 bundle,
1980 bundle,
1981 op=op,
1981 op=op,
1982 remote=pullop.remote,
1982 remote=pullop.remote,
1983 )
1983 )
1984 except bundle2.AbortFromPart as exc:
1984 except bundle2.AbortFromPart as exc:
1985 pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
1985 pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
1986 raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
1986 raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
1987 except error.BundleValueError as exc:
1987 except error.BundleValueError as exc:
1988 raise error.RemoteError(_(b'missing support for %s') % exc)
1988 raise error.RemoteError(_(b'missing support for %s') % exc)
1989
1989
1990 if pullop.fetch:
1990 if pullop.fetch:
1991 pullop.cgresult = bundle2.combinechangegroupresults(op)
1991 pullop.cgresult = bundle2.combinechangegroupresults(op)
1992
1992
1993 # processing phases change
1993 # processing phases change
1994 for namespace, value in op.records[b'listkeys']:
1994 for namespace, value in op.records[b'listkeys']:
1995 if namespace == b'phases':
1995 if namespace == b'phases':
1996 _pullapplyphases(pullop, value)
1996 _pullapplyphases(pullop, value)
1997
1997
1998 # processing bookmark update
1998 # processing bookmark update
1999 if bookmarksrequested:
1999 if bookmarksrequested:
2000 books = {}
2000 books = {}
2001 for record in op.records[b'bookmarks']:
2001 for record in op.records[b'bookmarks']:
2002 books[record[b'bookmark']] = record[b"node"]
2002 books[record[b'bookmark']] = record[b"node"]
2003 pullop.remotebookmarks = books
2003 pullop.remotebookmarks = books
2004 else:
2004 else:
2005 for namespace, value in op.records[b'listkeys']:
2005 for namespace, value in op.records[b'listkeys']:
2006 if namespace == b'bookmarks':
2006 if namespace == b'bookmarks':
2007 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2007 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2008
2008
2009 # bookmark data were either already there or pulled in the bundle
2009 # bookmark data were either already there or pulled in the bundle
2010 if pullop.remotebookmarks is not None:
2010 if pullop.remotebookmarks is not None:
2011 _pullbookmarks(pullop)
2011 _pullbookmarks(pullop)
2012
2012
2013
2013
2014 def _pullbundle2extraprepare(pullop, kwargs):
2014 def _pullbundle2extraprepare(pullop, kwargs):
2015 """hook function so that extensions can extend the getbundle call"""
2015 """hook function so that extensions can extend the getbundle call"""
2016
2016
2017
2017
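A sketch of how an extension might use this hook: wrapping it with ``extensions.wrapfunction`` to inject an extra getbundle argument. The wrapper and the argument name below are hypothetical:

    from mercurial import exchange, extensions

    def _extraprepare(orig, pullop, kwargs):
        orig(pullop, kwargs)
        kwargs[b'my-experimental-arg'] = True  # hypothetical argument name

    def extsetup(ui):
        extensions.wrapfunction(
            exchange, '_pullbundle2extraprepare', _extraprepare
        )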
2018 def _pullchangeset(pullop):
2018 def _pullchangeset(pullop):
2019 """pull changeset from unbundle into the local repo"""
2019 """pull changeset from unbundle into the local repo"""
2020 # We delay opening the transaction as late as possible so we
2020 # We delay opening the transaction as late as possible so we
2021 # don't open a transaction for nothing and don't break future useful
2021 # don't open a transaction for nothing and don't break future useful
2022 # rollback calls
2022 # rollback calls
2023 if b'changegroup' in pullop.stepsdone:
2023 if b'changegroup' in pullop.stepsdone:
2024 return
2024 return
2025 pullop.stepsdone.add(b'changegroup')
2025 pullop.stepsdone.add(b'changegroup')
2026 if not pullop.fetch:
2026 if not pullop.fetch:
2027 pullop.repo.ui.status(_(b"no changes found\n"))
2027 pullop.repo.ui.status(_(b"no changes found\n"))
2028 pullop.cgresult = 0
2028 pullop.cgresult = 0
2029 return
2029 return
2030 tr = pullop.gettransaction()
2030 tr = pullop.gettransaction()
2031 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
2031 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
2032 pullop.repo.ui.status(_(b"requesting all changes\n"))
2032 pullop.repo.ui.status(_(b"requesting all changes\n"))
2033 elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
2033 elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
2034 # issue1320, avoid a race if remote changed after discovery
2034 # issue1320, avoid a race if remote changed after discovery
2035 pullop.heads = pullop.rheads
2035 pullop.heads = pullop.rheads
2036
2036
2037 if pullop.remote.capable(b'getbundle'):
2037 if pullop.remote.capable(b'getbundle'):
2038 # TODO: get bundlecaps from remote
2038 # TODO: get bundlecaps from remote
2039 cg = pullop.remote.getbundle(
2039 cg = pullop.remote.getbundle(
2040 b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
2040 b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
2041 )
2041 )
2042 elif pullop.heads is None:
2042 elif pullop.heads is None:
2043 with pullop.remote.commandexecutor() as e:
2043 with pullop.remote.commandexecutor() as e:
2044 cg = e.callcommand(
2044 cg = e.callcommand(
2045 b'changegroup',
2045 b'changegroup',
2046 {
2046 {
2047 b'nodes': pullop.fetch,
2047 b'nodes': pullop.fetch,
2048 b'source': b'pull',
2048 b'source': b'pull',
2049 },
2049 },
2050 ).result()
2050 ).result()
2051
2051
2052 elif not pullop.remote.capable(b'changegroupsubset'):
2052 elif not pullop.remote.capable(b'changegroupsubset'):
2053 raise error.Abort(
2053 raise error.Abort(
2054 _(
2054 _(
2055 b"partial pull cannot be done because "
2055 b"partial pull cannot be done because "
2056 b"other repository doesn't support "
2056 b"other repository doesn't support "
2057 b"changegroupsubset."
2057 b"changegroupsubset."
2058 )
2058 )
2059 )
2059 )
2060 else:
2060 else:
2061 with pullop.remote.commandexecutor() as e:
2061 with pullop.remote.commandexecutor() as e:
2062 cg = e.callcommand(
2062 cg = e.callcommand(
2063 b'changegroupsubset',
2063 b'changegroupsubset',
2064 {
2064 {
2065 b'bases': pullop.fetch,
2065 b'bases': pullop.fetch,
2066 b'heads': pullop.heads,
2066 b'heads': pullop.heads,
2067 b'source': b'pull',
2067 b'source': b'pull',
2068 },
2068 },
2069 ).result()
2069 ).result()
2070
2070
2071 bundleop = bundle2.applybundle(
2071 bundleop = bundle2.applybundle(
2072 pullop.repo,
2072 pullop.repo,
2073 cg,
2073 cg,
2074 tr,
2074 tr,
2075 b'pull',
2075 b'pull',
2076 pullop.remote.url(),
2076 pullop.remote.url(),
2077 remote=pullop.remote,
2077 remote=pullop.remote,
2078 )
2078 )
2079 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
2079 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
2080
2080
2081
2081
2082 def _pullphase(pullop):
2082 def _pullphase(pullop):
2083 # Get remote phases data from remote
2083 # Get remote phases data from remote
2084 if b'phases' in pullop.stepsdone:
2084 if b'phases' in pullop.stepsdone:
2085 return
2085 return
2086 remotephases = listkeys(pullop.remote, b'phases')
2086 remotephases = listkeys(pullop.remote, b'phases')
2087 _pullapplyphases(pullop, remotephases)
2087 _pullapplyphases(pullop, remotephases)
2088
2088
2089
2089
2090 def _pullapplyphases(pullop, remotephases):
2090 def _pullapplyphases(pullop, remotephases):
2091 """apply phase movement from observed remote state"""
2091 """apply phase movement from observed remote state"""
2092 if b'phases' in pullop.stepsdone:
2092 if b'phases' in pullop.stepsdone:
2093 return
2093 return
2094 pullop.stepsdone.add(b'phases')
2094 pullop.stepsdone.add(b'phases')
2095 publishing = bool(remotephases.get(b'publishing', False))
2095 publishing = bool(remotephases.get(b'publishing', False))
2096 if remotephases and not publishing:
2096 if remotephases and not publishing:
2097 unfi = pullop.repo.unfiltered()
2097 unfi = pullop.repo.unfiltered()
2098 to_rev = unfi.changelog.index.rev
2098 to_rev = unfi.changelog.index.rev
2099 to_node = unfi.changelog.node
2099 to_node = unfi.changelog.node
2100 pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
2100 pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
2101 # remote is new and non-publishing
2101 # remote is new and non-publishing
2102 pheads_revs, _dr = phases.analyze_remote_phases(
2102 pheads_revs, _dr = phases.analyze_remote_phases(
2103 pullop.repo,
2103 pullop.repo,
2104 pulledsubset_revs,
2104 pulledsubset_revs,
2105 remotephases,
2105 remotephases,
2106 )
2106 )
2107 pheads = [to_node(r) for r in pheads_revs]
2107 pheads = [to_node(r) for r in pheads_revs]
2108 dheads = pullop.pulledsubset
2108 dheads = pullop.pulledsubset
2109 else:
2109 else:
2110 # Remote is old or publishing all common changesets
2110 # Remote is old or publishing all common changesets
2111 # should be seen as public
2111 # should be seen as public
2112 pheads = pullop.pulledsubset
2112 pheads = pullop.pulledsubset
2113 dheads = []
2113 dheads = []
2114 unfi = pullop.repo.unfiltered()
2114 unfi = pullop.repo.unfiltered()
2115 phase = unfi._phasecache.phase
2115 phase = unfi._phasecache.phase
2116 rev = unfi.changelog.index.get_rev
2116 rev = unfi.changelog.index.get_rev
2117 public = phases.public
2117 public = phases.public
2118 draft = phases.draft
2118 draft = phases.draft
2119
2119
2120 # exclude changesets already public locally and update the others
2120 # exclude changesets already public locally and update the others
2121 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
2121 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
2122 if pheads:
2122 if pheads:
2123 tr = pullop.gettransaction()
2123 tr = pullop.gettransaction()
2124 phases.advanceboundary(pullop.repo, tr, public, pheads)
2124 phases.advanceboundary(pullop.repo, tr, public, pheads)
2125
2125
2126 # exclude changesets already draft locally and update the others
2126 # exclude changesets already draft locally and update the others
2127 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
2127 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
2128 if dheads:
2128 if dheads:
2129 tr = pullop.gettransaction()
2129 tr = pullop.gettransaction()
2130 phases.advanceboundary(pullop.repo, tr, draft, dheads)
2130 phases.advanceboundary(pullop.repo, tr, draft, dheads)
2131
2131
2132
2132
2133 def _pullbookmarks(pullop):
2133 def _pullbookmarks(pullop):
2134 """process the remote bookmark information to update the local one"""
2134 """process the remote bookmark information to update the local one"""
2135 if b'bookmarks' in pullop.stepsdone:
2135 if b'bookmarks' in pullop.stepsdone:
2136 return
2136 return
2137 pullop.stepsdone.add(b'bookmarks')
2137 pullop.stepsdone.add(b'bookmarks')
2138 repo = pullop.repo
2138 repo = pullop.repo
2139 remotebookmarks = pullop.remotebookmarks
2139 remotebookmarks = pullop.remotebookmarks
2140 bookmarks_mode = None
2140 bookmarks_mode = None
2141 if pullop.remote_path is not None:
2141 if pullop.remote_path is not None:
2142 bookmarks_mode = pullop.remote_path.bookmarks_mode
2142 bookmarks_mode = pullop.remote_path.bookmarks_mode
2143 bookmod.updatefromremote(
2143 bookmod.updatefromremote(
2144 repo.ui,
2144 repo.ui,
2145 repo,
2145 repo,
2146 remotebookmarks,
2146 remotebookmarks,
2147 pullop.remote.url(),
2147 pullop.remote.url(),
2148 pullop.gettransaction,
2148 pullop.gettransaction,
2149 explicit=pullop.explicitbookmarks,
2149 explicit=pullop.explicitbookmarks,
2150 mode=bookmarks_mode,
2150 mode=bookmarks_mode,
2151 )
2151 )
2152
2152
2153
2153
2154 def _pullobsolete(pullop):
2154 def _pullobsolete(pullop):
2155 """utility function to pull obsolete markers from a remote
2155 """utility function to pull obsolete markers from a remote
2156
2156
2157 `gettransaction` is a function that returns the pull transaction, creating
2157 `gettransaction` is a function that returns the pull transaction, creating
2158 one if necessary. We return the transaction to inform the calling code that
2158 one if necessary. We return the transaction to inform the calling code that
2159 a new transaction has been created (when applicable).
2159 a new transaction has been created (when applicable).
2160
2160
2161 Exists mostly to allow overriding for experimentation purposes"""
2161 Exists mostly to allow overriding for experimentation purposes"""
2162 if b'obsmarkers' in pullop.stepsdone:
2162 if b'obsmarkers' in pullop.stepsdone:
2163 return
2163 return
2164 pullop.stepsdone.add(b'obsmarkers')
2164 pullop.stepsdone.add(b'obsmarkers')
2165 tr = None
2165 tr = None
2166 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
2166 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
2167 pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
2167 pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
2168 remoteobs = listkeys(pullop.remote, b'obsolete')
2168 remoteobs = listkeys(pullop.remote, b'obsolete')
2169 if b'dump0' in remoteobs:
2169 if b'dump0' in remoteobs:
2170 tr = pullop.gettransaction()
2170 tr = pullop.gettransaction()
2171 markers = []
2171 markers = []
2172 for key in sorted(remoteobs, reverse=True):
2172 for key in sorted(remoteobs, reverse=True):
2173 if key.startswith(b'dump'):
2173 if key.startswith(b'dump'):
2174 data = util.b85decode(remoteobs[key])
2174 data = util.b85decode(remoteobs[key])
2175 version, newmarks = obsolete._readmarkers(data)
2175 version, newmarks = obsolete._readmarkers(data)
2176 markers += newmarks
2176 markers += newmarks
2177 if markers:
2177 if markers:
2178 pullop.repo.obsstore.add(tr, markers)
2178 pullop.repo.obsstore.add(tr, markers)
2179 pullop.repo.invalidatevolatilesets()
2179 pullop.repo.invalidatevolatilesets()
2180 return tr
2180 return tr
2181
2181
2182
2182
2183 def applynarrowacl(repo, kwargs):
2183 def applynarrowacl(repo, kwargs):
2184 """Apply narrow fetch access control.
2184 """Apply narrow fetch access control.
2185
2185
2186 This massages the named arguments for getbundle wire protocol commands
2186 This massages the named arguments for getbundle wire protocol commands
2187 so requested data is filtered through access control rules.
2187 so requested data is filtered through access control rules.
2188 """
2188 """
2189 ui = repo.ui
2189 ui = repo.ui
2190 # TODO this assumes existence of HTTP and is a layering violation.
2190 # TODO this assumes existence of HTTP and is a layering violation.
2191 username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
2191 username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
2192 user_includes = ui.configlist(
2192 user_includes = ui.configlist(
2193 _NARROWACL_SECTION,
2193 _NARROWACL_SECTION,
2194 username + b'.includes',
2194 username + b'.includes',
2195 ui.configlist(_NARROWACL_SECTION, b'default.includes'),
2195 ui.configlist(_NARROWACL_SECTION, b'default.includes'),
2196 )
2196 )
2197 user_excludes = ui.configlist(
2197 user_excludes = ui.configlist(
2198 _NARROWACL_SECTION,
2198 _NARROWACL_SECTION,
2199 username + b'.excludes',
2199 username + b'.excludes',
2200 ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
2200 ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
2201 )
2201 )
2202 if not user_includes:
2202 if not user_includes:
2203 raise error.Abort(
2203 raise error.Abort(
2204 _(b"%s configuration for user %s is empty")
2204 _(b"%s configuration for user %s is empty")
2205 % (_NARROWACL_SECTION, username)
2205 % (_NARROWACL_SECTION, username)
2206 )
2206 )
2207
2207
2208 user_includes = [
2208 user_includes = [
2209 b'path:.' if p == b'*' else b'path:' + p for p in user_includes
2209 b'path:.' if p == b'*' else b'path:' + p for p in user_includes
2210 ]
2210 ]
2211 user_excludes = [
2211 user_excludes = [
2212 b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
2212 b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
2213 ]
2213 ]
2214
2214
2215 req_includes = set(kwargs.get('includepats', []))
2215 req_includes = set(kwargs.get('includepats', []))
2216 req_excludes = set(kwargs.get('excludepats', []))
2216 req_excludes = set(kwargs.get('excludepats', []))
2217
2217
2218 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
2218 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
2219 req_includes, req_excludes, user_includes, user_excludes
2219 req_includes, req_excludes, user_includes, user_excludes
2220 )
2220 )
2221
2221
2222 if invalid_includes:
2222 if invalid_includes:
2223 raise error.Abort(
2223 raise error.Abort(
2224 _(b"The following includes are not accessible for %s: %s")
2224 _(b"The following includes are not accessible for %s: %s")
2225 % (username, stringutil.pprint(invalid_includes))
2225 % (username, stringutil.pprint(invalid_includes))
2226 )
2226 )
2227
2227
2228 new_args = {}
2228 new_args = {}
2229 new_args.update(kwargs)
2229 new_args.update(kwargs)
2230 new_args['narrow'] = True
2230 new_args['narrow'] = True
2231 new_args['narrow_acl'] = True
2231 new_args['narrow_acl'] = True
2232 new_args['includepats'] = req_includes
2232 new_args['includepats'] = req_includes
2233 if req_excludes:
2233 if req_excludes:
2234 new_args['excludepats'] = req_excludes
2234 new_args['excludepats'] = req_excludes
2235
2235
2236 return new_args
2236 return new_args
2237
2237
2238
2238
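A hedged sketch of how the per-user settings consulted above could be provided; the section name is whatever ``_NARROWACL_SECTION`` resolves to (defined elsewhere in this module), and the user name and patterns are invented:

    # e.g. from a server-side extension's reposetup(ui, repo) hook
    ui.setconfig(_NARROWACL_SECTION, b'default.includes', b'dir1 dir2')
    ui.setconfig(_NARROWACL_SECTION, b'alice.includes', b'dir1 dir3')
    ui.setconfig(_NARROWACL_SECTION, b'alice.excludes', b'dir1/private')
    # '*' is also accepted and is translated to b'path:.' above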
2239 def _computeellipsis(repo, common, heads, known, match, depth=None):
2239 def _computeellipsis(repo, common, heads, known, match, depth=None):
2240 """Compute the shape of a narrowed DAG.
2240 """Compute the shape of a narrowed DAG.
2241
2241
2242 Args:
2242 Args:
2243 repo: The repository we're transferring.
2243 repo: The repository we're transferring.
2244 common: The roots of the DAG range we're transferring.
2244 common: The roots of the DAG range we're transferring.
2245 May be just [nullid], which means all ancestors of heads.
2245 May be just [nullid], which means all ancestors of heads.
2246 heads: The heads of the DAG range we're transferring.
2246 heads: The heads of the DAG range we're transferring.
2247 match: The narrowmatcher that allows us to identify relevant changes.
2247 match: The narrowmatcher that allows us to identify relevant changes.
2248 depth: If not None, only consider nodes to be full nodes if they are at
2248 depth: If not None, only consider nodes to be full nodes if they are at
2249 most depth changesets away from one of the heads.
2249 most depth changesets away from one of the heads.
2250
2250
2251 Returns:
2251 Returns:
2252 A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:
2252 A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:
2253
2253
2254 visitnodes: The list of nodes (either full or ellipsis) which
2254 visitnodes: The list of nodes (either full or ellipsis) which
2255 need to be sent to the client.
2255 need to be sent to the client.
2256 relevant_nodes: The set of changelog nodes which change a file inside
2256 relevant_nodes: The set of changelog nodes which change a file inside
2257 the narrowspec. The client needs these as non-ellipsis nodes.
2257 the narrowspec. The client needs these as non-ellipsis nodes.
2258 ellipsisroots: A dict of {rev: parents} that is used in
2258 ellipsisroots: A dict of {rev: parents} that is used in
2259 narrowchangegroup to produce ellipsis nodes with the
2259 narrowchangegroup to produce ellipsis nodes with the
2260 correct parents.
2260 correct parents.
2261 """
2261 """
2262 cl = repo.changelog
2262 cl = repo.changelog
2263 mfl = repo.manifestlog
2263 mfl = repo.manifestlog
2264
2264
2265 clrev = cl.rev
2265 clrev = cl.rev
2266
2266
2267 commonrevs = {clrev(n) for n in common} | {nullrev}
2267 commonrevs = {clrev(n) for n in common} | {nullrev}
2268 headsrevs = {clrev(n) for n in heads}
2268 headsrevs = {clrev(n) for n in heads}
2269
2269
2270 if depth:
2270 if depth:
2271 revdepth = {h: 0 for h in headsrevs}
2271 revdepth = {h: 0 for h in headsrevs}
2272
2272
2273 ellipsisheads = collections.defaultdict(set)
2273 ellipsisheads = collections.defaultdict(set)
2274 ellipsisroots = collections.defaultdict(set)
2274 ellipsisroots = collections.defaultdict(set)
2275
2275
2276 def addroot(head, curchange):
2276 def addroot(head, curchange):
2277 """Add a root to an ellipsis head, splitting heads with 3 roots."""
2277 """Add a root to an ellipsis head, splitting heads with 3 roots."""
2278 ellipsisroots[head].add(curchange)
2278 ellipsisroots[head].add(curchange)
2279 # Recursively split ellipsis heads with 3 roots by finding the
2279 # Recursively split ellipsis heads with 3 roots by finding the
2280 # roots' youngest common descendant which is an elided merge commit.
2280 # roots' youngest common descendant which is an elided merge commit.
2281 # That descendant takes 2 of the 3 roots as its own, and becomes a
2281 # That descendant takes 2 of the 3 roots as its own, and becomes a
2282 # root of the head.
2282 # root of the head.
2283 while len(ellipsisroots[head]) > 2:
2283 while len(ellipsisroots[head]) > 2:
2284 child, roots = splithead(head)
2284 child, roots = splithead(head)
2285 splitroots(head, child, roots)
2285 splitroots(head, child, roots)
2286 head = child # Recurse in case we just added a 3rd root
2286 head = child # Recurse in case we just added a 3rd root
2287
2287
2288 def splitroots(head, child, roots):
2288 def splitroots(head, child, roots):
2289 ellipsisroots[head].difference_update(roots)
2289 ellipsisroots[head].difference_update(roots)
2290 ellipsisroots[head].add(child)
2290 ellipsisroots[head].add(child)
2291 ellipsisroots[child].update(roots)
2291 ellipsisroots[child].update(roots)
2292 ellipsisroots[child].discard(child)
2292 ellipsisroots[child].discard(child)
2293
2293
2294 def splithead(head):
2294 def splithead(head):
2295 r1, r2, r3 = sorted(ellipsisroots[head])
2295 r1, r2, r3 = sorted(ellipsisroots[head])
2296 for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
2296 for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
2297 mid = repo.revs(
2297 mid = repo.revs(
2298 b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
2298 b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
2299 )
2299 )
2300 for j in mid:
2300 for j in mid:
2301 if j == nr2:
2301 if j == nr2:
2302 return nr2, (nr1, nr2)
2302 return nr2, (nr1, nr2)
2303 if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
2303 if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
2304 return j, (nr1, nr2)
2304 return j, (nr1, nr2)
2305 raise error.Abort(
2305 raise error.Abort(
2306 _(
2306 _(
2307 b'Failed to split up ellipsis node! head: %d, '
2307 b'Failed to split up ellipsis node! head: %d, '
2308 b'roots: %d %d %d'
2308 b'roots: %d %d %d'
2309 )
2309 )
2310 % (head, r1, r2, r3)
2310 % (head, r1, r2, r3)
2311 )
2311 )
2312
2312
2313 missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
2313 missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
2314 visit = reversed(missing)
2314 visit = reversed(missing)
2315 relevant_nodes = set()
2315 relevant_nodes = set()
2316 visitnodes = [cl.node(m) for m in missing]
2316 visitnodes = [cl.node(m) for m in missing]
2317 required = set(headsrevs) | known
2317 required = set(headsrevs) | known
2318 for rev in visit:
2318 for rev in visit:
2319 clrev = cl.changelogrevision(rev)
2319 clrev = cl.changelogrevision(rev)
2320 ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
2320 ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
2321 if depth is not None:
2321 if depth is not None:
2322 curdepth = revdepth[rev]
2322 curdepth = revdepth[rev]
2323 for p in ps:
2323 for p in ps:
2324 revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
2324 revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
2325 needed = False
2325 needed = False
2326 shallow_enough = depth is None or revdepth[rev] <= depth
2326 shallow_enough = depth is None or revdepth[rev] <= depth
2327 if shallow_enough:
2327 if shallow_enough:
2328 curmf = mfl[clrev.manifest].read()
2328 curmf = mfl[clrev.manifest].read()
2329 if ps:
2329 if ps:
2330 # We choose to not trust the changed files list in
2330 # We choose to not trust the changed files list in
2331 # changesets because it's not always correct. TODO: could
2331 # changesets because it's not always correct. TODO: could
2332 # we trust it for the non-merge case?
2332 # we trust it for the non-merge case?
2333 p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
2333 p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
2334 needed = bool(curmf.diff(p1mf, match))
2334 needed = bool(curmf.diff(p1mf, match))
2335 if not needed and len(ps) > 1:
2335 if not needed and len(ps) > 1:
2336 # For merge changes, the list of changed files is not
2336 # For merge changes, the list of changed files is not
2337 # helpful, since we need to emit the merge if a file
2337 # helpful, since we need to emit the merge if a file
2338 # in the narrow spec has changed on either side of the
2338 # in the narrow spec has changed on either side of the
2339 # merge. As a result, we do a manifest diff to check.
2339 # merge. As a result, we do a manifest diff to check.
2340 p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
2340 p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
2341 needed = bool(curmf.diff(p2mf, match))
2341 needed = bool(curmf.diff(p2mf, match))
2342 else:
2342 else:
2343 # For a root node, we need to include the node if any
2343 # For a root node, we need to include the node if any
2344 # files in the node match the narrowspec.
2344 # files in the node match the narrowspec.
2345 needed = any(curmf.walk(match))
2345 needed = any(curmf.walk(match))
2346
2346
2347 if needed:
2347 if needed:
2348 for head in ellipsisheads[rev]:
2348 for head in ellipsisheads[rev]:
2349 addroot(head, rev)
2349 addroot(head, rev)
2350 for p in ps:
2350 for p in ps:
2351 required.add(p)
2351 required.add(p)
2352 relevant_nodes.add(cl.node(rev))
2352 relevant_nodes.add(cl.node(rev))
2353 else:
2353 else:
2354 if not ps:
2354 if not ps:
2355 ps = [nullrev]
2355 ps = [nullrev]
2356 if rev in required:
2356 if rev in required:
2357 for head in ellipsisheads[rev]:
2357 for head in ellipsisheads[rev]:
2358 addroot(head, rev)
2358 addroot(head, rev)
2359 for p in ps:
2359 for p in ps:
2360 ellipsisheads[p].add(rev)
2360 ellipsisheads[p].add(rev)
2361 else:
2361 else:
2362 for p in ps:
2362 for p in ps:
2363 ellipsisheads[p] |= ellipsisheads[rev]
2363 ellipsisheads[p] |= ellipsisheads[rev]
2364
2364
2365 # add common changesets as roots of their reachable ellipsis heads
2365 # add common changesets as roots of their reachable ellipsis heads
2366 for c in commonrevs:
2366 for c in commonrevs:
2367 for head in ellipsisheads[c]:
2367 for head in ellipsisheads[c]:
2368 addroot(head, c)
2368 addroot(head, c)
2369 return visitnodes, relevant_nodes, ellipsisroots
2369 return visitnodes, relevant_nodes, ellipsisroots
2370
2370
2371
2371
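A minimal sketch of invoking this helper, assuming an existing ``repo``: compute the ellipsis shape for everything reachable from the current heads, restricted to files under ``dir1`` and to at most 10 levels of history per head. The pattern and depth are arbitrary examples:

    match = narrowspec.match(repo.root, include=[b'path:dir1'])
    visitnodes, relevant, ellipsisroots = _computeellipsis(
        repo,
        common=[repo.nullid],
        heads=repo.heads(),
        known=set(),
        match=match,
        depth=10,
    )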
2372 def caps20to10(repo, role):
2372 def caps20to10(repo, role):
2373 """return a set with appropriate options to use bundle20 during getbundle"""
2373 """return a set with appropriate options to use bundle20 during getbundle"""
2374 caps = {b'HG20'}
2374 caps = {b'HG20'}
2375 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
2375 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
2376 caps.add(b'bundle2=' + urlreq.quote(capsblob))
2376 caps.add(b'bundle2=' + urlreq.quote(capsblob))
2377 return caps
2377 return caps
2378
2378
2379
2379
2380 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2380 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2381 getbundle2partsorder = []
2381 getbundle2partsorder = []
2382
2382
2383 # Mapping between step name and function
2383 # Mapping between step name and function
2384 #
2384 #
2385 # This exists to help extensions wrap steps if necessary
2385 # This exists to help extensions wrap steps if necessary
2386 getbundle2partsmapping = {}
2386 getbundle2partsmapping = {}
2387
2387
2388
2388
2389 def getbundle2partsgenerator(stepname, idx=None):
2389 def getbundle2partsgenerator(stepname, idx=None):
2390 """decorator for function generating bundle2 part for getbundle
2390 """decorator for function generating bundle2 part for getbundle
2391
2391
2392 The function is added to the step -> function mapping and appended to the
2392 The function is added to the step -> function mapping and appended to the
2393 list of steps. Beware that decorated functions will be added in order
2393 list of steps. Beware that decorated functions will be added in order
2394 (this may matter).
2394 (this may matter).
2395
2395
2396 You can only use this decorator for new steps; if you want to wrap a step
2396 You can only use this decorator for new steps; if you want to wrap a step
2397 from an extension, modify the getbundle2partsmapping dictionary directly."""
2397 from an extension, modify the getbundle2partsmapping dictionary directly."""
2398
2398
2399 def dec(func):
2399 def dec(func):
2400 assert stepname not in getbundle2partsmapping
2400 assert stepname not in getbundle2partsmapping
2401 getbundle2partsmapping[stepname] = func
2401 getbundle2partsmapping[stepname] = func
2402 if idx is None:
2402 if idx is None:
2403 getbundle2partsorder.append(stepname)
2403 getbundle2partsorder.append(stepname)
2404 else:
2404 else:
2405 getbundle2partsorder.insert(idx, stepname)
2405 getbundle2partsorder.insert(idx, stepname)
2406 return func
2406 return func
2407
2407
2408 return dec
2408 return dec
2409
2409
2410
2410
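Mirroring the pulldiscovery example earlier, a hedged sketch of an extension registering its own part generator; the part name, capability check, and payload are invented for illustration:

    @getbundle2partsgenerator(b'example-part')
    def _getbundleexamplepart(
        bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
    ):
        # only emit the part when the client advertised support for it
        if not b2caps or b'example-part' not in b2caps:
            return
        bundler.newpart(b'example-part', data=b'example payload')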
2411 def bundle2requested(bundlecaps):
2411 def bundle2requested(bundlecaps):
2412 if bundlecaps is not None:
2412 if bundlecaps is not None:
2413 return any(cap.startswith(b'HG2') for cap in bundlecaps)
2413 return any(cap.startswith(b'HG2') for cap in bundlecaps)
2414 return False
2414 return False
2415
2415
2416
2416
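For illustration, ``bundlecaps`` sent by a bundle2-capable client contains an ``HG2x`` marker, so:

    assert bundle2requested({b'HG20', b'bundle2=...'})
    assert not bundle2requested({b'somecap'})
    assert not bundle2requested(None)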
2417 def getbundlechunks(
2417 def getbundlechunks(
2418 repo,
2418 repo,
2419 source,
2419 source,
2420 heads=None,
2420 heads=None,
2421 common=None,
2421 common=None,
2422 bundlecaps=None,
2422 bundlecaps=None,
2423 remote_sidedata=None,
2423 remote_sidedata=None,
2424 **kwargs,
2424 **kwargs,
2425 ):
2425 ):
2426 """Return chunks constituting a bundle's raw data.
2426 """Return chunks constituting a bundle's raw data.
2427
2427
2428 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
2428 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
2429 passed.
2429 passed.
2430
2430
2431 Returns a 2-tuple of a dict with metadata about the generated bundle
2431 Returns a 2-tuple of a dict with metadata about the generated bundle
2432 and an iterator over raw chunks (of varying sizes).
2432 and an iterator over raw chunks (of varying sizes).
2433 """
2433 """
2434 kwargs = pycompat.byteskwargs(kwargs)
2434 kwargs = pycompat.byteskwargs(kwargs)
2435 info = {}
2435 info = {}
2436 usebundle2 = bundle2requested(bundlecaps)
2436 usebundle2 = bundle2requested(bundlecaps)
2437 # bundle10 case
2437 # bundle10 case
2438 if not usebundle2:
2438 if not usebundle2:
2439 if bundlecaps and not kwargs.get(b'cg', True):
2439 if bundlecaps and not kwargs.get(b'cg', True):
2440 raise ValueError(
2440 raise ValueError(
2441 _(b'request for bundle10 must include changegroup')
2441 _(b'request for bundle10 must include changegroup')
2442 )
2442 )
2443
2443
2444 if kwargs:
2444 if kwargs:
2445 raise ValueError(
2445 raise ValueError(
2446 _(b'unsupported getbundle arguments: %s')
2446 _(b'unsupported getbundle arguments: %s')
2447 % b', '.join(sorted(kwargs.keys()))
2447 % b', '.join(sorted(kwargs.keys()))
2448 )
2448 )
2449 outgoing = _computeoutgoing(repo, heads, common)
2449 outgoing = _computeoutgoing(repo, heads, common)
2450 info[b'bundleversion'] = 1
2450 info[b'bundleversion'] = 1
2451 return (
2451 return (
2452 info,
2452 info,
2453 changegroup.makestream(
2453 changegroup.makestream(
2454 repo,
2454 repo,
2455 outgoing,
2455 outgoing,
2456 b'01',
2456 b'01',
2457 source,
2457 source,
2458 bundlecaps=bundlecaps,
2458 bundlecaps=bundlecaps,
2459 remote_sidedata=remote_sidedata,
2459 remote_sidedata=remote_sidedata,
2460 ),
2460 ),
2461 )
2461 )
2462
2462
2463 # bundle20 case
2463 # bundle20 case
2464 info[b'bundleversion'] = 2
2464 info[b'bundleversion'] = 2
2465 b2caps = {}
2465 b2caps = {}
2466 for bcaps in bundlecaps:
2466 for bcaps in bundlecaps:
2467 if bcaps.startswith(b'bundle2='):
2467 if bcaps.startswith(b'bundle2='):
2468 blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
2468 blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
2469 b2caps.update(bundle2.decodecaps(blob))
2469 b2caps.update(bundle2.decodecaps(blob))
2470 bundler = bundle2.bundle20(repo.ui, b2caps)
2470 bundler = bundle2.bundle20(repo.ui, b2caps)
2471
2471
2472 kwargs[b'heads'] = heads
2472 kwargs[b'heads'] = heads
2473 kwargs[b'common'] = common
2473 kwargs[b'common'] = common
2474
2474
2475 for name in getbundle2partsorder:
2475 for name in getbundle2partsorder:
2476 func = getbundle2partsmapping[name]
2476 func = getbundle2partsmapping[name]
2477 func(
2477 func(
2478 bundler,
2478 bundler,
2479 repo,
2479 repo,
2480 source,
2480 source,
2481 bundlecaps=bundlecaps,
2481 bundlecaps=bundlecaps,
2482 b2caps=b2caps,
2482 b2caps=b2caps,
2483 remote_sidedata=remote_sidedata,
2483 remote_sidedata=remote_sidedata,
2484 **pycompat.strkwargs(kwargs),
2484 **pycompat.strkwargs(kwargs),
2485 )
2485 )
2486
2486
2487 info[b'prefercompressed'] = bundler.prefercompressed
2487 info[b'prefercompressed'] = bundler.prefercompressed
2488
2488
2489 return info, bundler.getchunks()
2489 return info, bundler.getchunks()
2490
2490
2491
2491
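A rough usage sketch, assuming an existing ``repo``: with no ``HG2x`` entry in ``bundlecaps`` this takes the bundle10 path and yields raw changegroup chunks, which a caller can concatenate. The result carries no bundle header, so this is only meant to show the shape of the return value:

    info, chunks = getbundlechunks(repo, b'serve', heads=repo.heads())
    assert info[b'bundleversion'] == 1
    payload = b''.join(chunks)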
2492 @getbundle2partsgenerator(b'stream')
2492 @getbundle2partsgenerator(b'stream')
2493 def _getbundlestream2(bundler, repo, *args, **kwargs):
2493 def _getbundlestream2(bundler, repo, *args, **kwargs):
2494 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
2494 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
2495
2495
2496
2496
2497 @getbundle2partsgenerator(b'changegroup')
2497 @getbundle2partsgenerator(b'changegroup')
2498 def _getbundlechangegrouppart(
2498 def _getbundlechangegrouppart(
2499 bundler,
2499 bundler,
2500 repo,
2500 repo,
2501 source,
2501 source,
2502 bundlecaps=None,
2502 bundlecaps=None,
2503 b2caps=None,
2503 b2caps=None,
2504 heads=None,
2504 heads=None,
2505 common=None,
2505 common=None,
2506 remote_sidedata=None,
2506 remote_sidedata=None,
2507 **kwargs,
2507 **kwargs,
2508 ):
2508 ):
2509 """add a changegroup part to the requested bundle"""
2509 """add a changegroup part to the requested bundle"""
2510 if not kwargs.get('cg', True) or not b2caps:
2510 if not kwargs.get('cg', True) or not b2caps:
2511 return
2511 return
2512
2512
2513 version = b'01'
2513 version = b'01'
2514 cgversions = b2caps.get(b'changegroup')
2514 cgversions = b2caps.get(b'changegroup')
2515 if cgversions: # 3.1 and 3.2 ship with an empty value
2515 if cgversions: # 3.1 and 3.2 ship with an empty value
2516 cgversions = [
2516 cgversions = [
2517 v
2517 v
2518 for v in cgversions
2518 for v in cgversions
2519 if v in changegroup.supportedoutgoingversions(repo)
2519 if v in changegroup.supportedoutgoingversions(repo)
2520 ]
2520 ]
2521 if not cgversions:
2521 if not cgversions:
2522 raise error.Abort(_(b'no common changegroup version'))
2522 raise error.Abort(_(b'no common changegroup version'))
2523 version = max(cgversions)
2523 version = max(cgversions)
2524
2524
2525 outgoing = _computeoutgoing(repo, heads, common)
2525 outgoing = _computeoutgoing(repo, heads, common)
2526 if not outgoing.missing:
2526 if not outgoing.missing:
2527 return
2527 return
2528
2528
2529 if kwargs.get('narrow', False):
2529 if kwargs.get('narrow', False):
2530 include = sorted(filter(bool, kwargs.get('includepats', [])))
2530 include = sorted(filter(bool, kwargs.get('includepats', [])))
2531 exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
2531 exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
2532 matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
2532 matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
2533 else:
2533 else:
2534 matcher = None
2534 matcher = None
2535
2535
2536 cgstream = changegroup.makestream(
2536 cgstream = changegroup.makestream(
2537 repo,
2537 repo,
2538 outgoing,
2538 outgoing,
2539 version,
2539 version,
2540 source,
2540 source,
2541 bundlecaps=bundlecaps,
2541 bundlecaps=bundlecaps,
2542 matcher=matcher,
2542 matcher=matcher,
2543 remote_sidedata=remote_sidedata,
2543 remote_sidedata=remote_sidedata,
2544 )
2544 )
2545
2545
2546 part = bundler.newpart(b'changegroup', data=cgstream)
2546 part = bundler.newpart(b'changegroup', data=cgstream)
2547 if cgversions:
2547 if cgversions:
2548 part.addparam(b'version', version)
2548 part.addparam(b'version', version)
2549
2549
2550 part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)
2550 part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)
2551
2551
2552 if scmutil.istreemanifest(repo):
2552 if scmutil.istreemanifest(repo):
2553 part.addparam(b'treemanifest', b'1')
2553 part.addparam(b'treemanifest', b'1')
2554
2554
2555 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2555 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2556 part.addparam(b'exp-sidedata', b'1')
2556 part.addparam(b'exp-sidedata', b'1')
2557 sidedata = bundle2.format_remote_wanted_sidedata(repo)
2557 sidedata = bundle2.format_remote_wanted_sidedata(repo)
2558 part.addparam(b'exp-wanted-sidedata', sidedata)
2558 part.addparam(b'exp-wanted-sidedata', sidedata)
2559
2559
2560 if (
2560 if (
2561 kwargs.get('narrow', False)
2561 kwargs.get('narrow', False)
2562 and kwargs.get('narrow_acl', False)
2562 and kwargs.get('narrow_acl', False)
2563 and (include or exclude)
2563 and (include or exclude)
2564 ):
2564 ):
2565 # this is mandatory because otherwise ACL clients won't work
2565 # this is mandatory because otherwise ACL clients won't work
2566 narrowspecpart = bundler.newpart(b'Narrow:responsespec')
2566 narrowspecpart = bundler.newpart(b'Narrow:responsespec')
2567 narrowspecpart.data = b'%s\0%s' % (
2567 narrowspecpart.data = b'%s\0%s' % (
2568 b'\n'.join(include),
2568 b'\n'.join(include),
2569 b'\n'.join(exclude),
2569 b'\n'.join(exclude),
2570 )
2570 )
2571
2571
2572
2572
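Schematically, the ``Narrow:responsespec`` payload assembled above is the newline-joined include patterns, a NUL byte, then the newline-joined exclude patterns; the patterns here are invented:

    include = [b'path:dir1', b'path:dir2']
    exclude = [b'path:dir1/private']
    payload = b'%s\0%s' % (b'\n'.join(include), b'\n'.join(exclude))
    # payload == b'path:dir1\npath:dir2\x00path:dir1/private'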
2573 @getbundle2partsgenerator(b'bookmarks')
2573 @getbundle2partsgenerator(b'bookmarks')
2574 def _getbundlebookmarkpart(
2574 def _getbundlebookmarkpart(
2575 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2575 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2576 ):
2576 ):
2577 """add a bookmark part to the requested bundle"""
2577 """add a bookmark part to the requested bundle"""
2578 if not kwargs.get('bookmarks', False):
2578 if not kwargs.get('bookmarks', False):
2579 return
2579 return
2580 if not b2caps or b'bookmarks' not in b2caps:
2580 if not b2caps or b'bookmarks' not in b2caps:
2581 raise error.Abort(_(b'no common bookmarks exchange method'))
2581 raise error.Abort(_(b'no common bookmarks exchange method'))
2582 books = bookmod.listbinbookmarks(repo)
2582 books = bookmod.listbinbookmarks(repo)
2583 data = bookmod.binaryencode(repo, books)
2583 data = bookmod.binaryencode(repo, books)
2584 if data:
2584 if data:
2585 bundler.newpart(b'bookmarks', data=data)
2585 bundler.newpart(b'bookmarks', data=data)
2586
2586
2587
2587
2588 @getbundle2partsgenerator(b'listkeys')
2588 @getbundle2partsgenerator(b'listkeys')
2589 def _getbundlelistkeysparts(
2589 def _getbundlelistkeysparts(
2590 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2590 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2591 ):
2591 ):
2592 """add parts containing listkeys namespaces to the requested bundle"""
2592 """add parts containing listkeys namespaces to the requested bundle"""
2593 listkeys = kwargs.get('listkeys', ())
2593 listkeys = kwargs.get('listkeys', ())
2594 for namespace in listkeys:
2594 for namespace in listkeys:
2595 part = bundler.newpart(b'listkeys')
2595 part = bundler.newpart(b'listkeys')
2596 part.addparam(b'namespace', namespace)
2596 part.addparam(b'namespace', namespace)
2597 keys = repo.listkeys(namespace).items()
2597 keys = repo.listkeys(namespace).items()
2598 part.data = pushkey.encodekeys(keys)
2598 part.data = pushkey.encodekeys(keys)
2599
2599
2600
2600
2601 @getbundle2partsgenerator(b'obsmarkers')
2601 @getbundle2partsgenerator(b'obsmarkers')
2602 def _getbundleobsmarkerpart(
2602 def _getbundleobsmarkerpart(
2603 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2603 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2604 ):
2604 ):
2605 """add an obsolescence markers part to the requested bundle"""
2605 """add an obsolescence markers part to the requested bundle"""
2606 if kwargs.get('obsmarkers', False):
2606 if kwargs.get('obsmarkers', False):
2607 if heads is None:
2608 heads = repo.heads()
2609 subset = [c.node() for c in repo.set(b'::%ln', heads)]
2610 markers = repo.obsstore.relevantmarkers(subset)
2607 unfi_cl = repo.unfiltered().changelog
2608 if heads is None:
2609 headrevs = repo.changelog.headrevs()
2610 else:
2611 get_rev = unfi_cl.index.get_rev
2612 headrevs = [get_rev(node) for node in heads]
2613 headrevs = [rev for rev in headrevs if rev is not None]
2614 revs = unfi_cl.ancestors(headrevs, inclusive=True)
2615 markers = repo.obsstore.relevantmarkers(revs=revs)
2611 markers = obsutil.sortedmarkers(markers)
2616 markers = obsutil.sortedmarkers(markers)
2612 bundle2.buildobsmarkerspart(bundler, markers)
2617 bundle2.buildobsmarkerspart(bundler, markers)
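Aside: this hunk is the point of the revision. Instead of materializing every ancestor node through a revset (subset of `::heads`) and handing that node list to relevantmarkers(), the new code resolves the heads to integer revisions on the unfiltered changelog and lets ancestors() walk the graph as revs, avoiding large node sets on big repositories. A minimal, self-contained sketch of that ancestor walk over a toy changelog (not Mercurial's own index API):

    def ancestor_revs(parents, headrevs):
        # parents: dict mapping rev -> tuple of parent revs, -1 meaning "none";
        # returns every rev reachable from headrevs, heads included.
        seen = set()
        stack = [r for r in headrevs if r >= 0]
        while stack:
            rev = stack.pop()
            if rev in seen:
                continue
            seen.add(rev)
            stack.extend(p for p in parents[rev] if p >= 0 and p not in seen)
        return seen

    # toy history: 0 <- 1 <- 2, with an extra head 3 branching off 1
    parents = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (1, -1)}
    assert ancestor_revs(parents, [2]) == {0, 1, 2}
    assert ancestor_revs(parents, [2, 3]) == {0, 1, 2, 3}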
2613
2618
2614
2619
2615 @getbundle2partsgenerator(b'phases')
2620 @getbundle2partsgenerator(b'phases')
2616 def _getbundlephasespart(
2621 def _getbundlephasespart(
2617 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2622 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2618 ):
2623 ):
2619 """add phase heads part to the requested bundle"""
2624 """add phase heads part to the requested bundle"""
2620 if kwargs.get('phases', False):
2625 if kwargs.get('phases', False):
2621 if not b2caps or b'heads' not in b2caps.get(b'phases'):
2626 if not b2caps or b'heads' not in b2caps.get(b'phases'):
2622 raise error.Abort(_(b'no common phases exchange method'))
2627 raise error.Abort(_(b'no common phases exchange method'))
2623 if heads is None:
2628 if heads is None:
2624 heads = repo.heads()
2629 heads = repo.heads()
2625
2630
2626 headsbyphase = collections.defaultdict(set)
2631 headsbyphase = collections.defaultdict(set)
2627 if repo.publishing():
2632 if repo.publishing():
2628 headsbyphase[phases.public] = heads
2633 headsbyphase[phases.public] = heads
2629 else:
2634 else:
2630 # find the appropriate heads to move
2635 # find the appropriate heads to move
2631
2636
2632 phase = repo._phasecache.phase
2637 phase = repo._phasecache.phase
2633 node = repo.changelog.node
2638 node = repo.changelog.node
2634 rev = repo.changelog.rev
2639 rev = repo.changelog.rev
2635 for h in heads:
2640 for h in heads:
2636 headsbyphase[phase(repo, rev(h))].add(h)
2641 headsbyphase[phase(repo, rev(h))].add(h)
2637 seenphases = list(headsbyphase.keys())
2642 seenphases = list(headsbyphase.keys())
2638
2643
2639 # We do not handle anything but public and draft phases for now
2644 # We do not handle anything but public and draft phases for now
2640 if seenphases:
2645 if seenphases:
2641 assert max(seenphases) <= phases.draft
2646 assert max(seenphases) <= phases.draft
2642
2647
2643 # if client is pulling non-public changesets, we need to find
2648 # if client is pulling non-public changesets, we need to find
2644 # intermediate public heads.
2649 # intermediate public heads.
2645 draftheads = headsbyphase.get(phases.draft, set())
2650 draftheads = headsbyphase.get(phases.draft, set())
2646 if draftheads:
2651 if draftheads:
2647 publicheads = headsbyphase.get(phases.public, set())
2652 publicheads = headsbyphase.get(phases.public, set())
2648
2653
2649 revset = b'heads(only(%ln, %ln) and public())'
2654 revset = b'heads(only(%ln, %ln) and public())'
2650 extraheads = repo.revs(revset, draftheads, publicheads)
2655 extraheads = repo.revs(revset, draftheads, publicheads)
2651 for r in extraheads:
2656 for r in extraheads:
2652 headsbyphase[phases.public].add(node(r))
2657 headsbyphase[phases.public].add(node(r))
2653
2658
2654 # transform data in a format used by the encoding function
2659 # transform data in a format used by the encoding function
2655 phasemapping = {
2660 phasemapping = {
2656 phase: sorted(headsbyphase[phase]) for phase in phases.allphases
2661 phase: sorted(headsbyphase[phase]) for phase in phases.allphases
2657 }
2662 }
2658
2663
2659 # generate the actual part
2664 # generate the actual part
2660 phasedata = phases.binaryencode(phasemapping)
2665 phasedata = phases.binaryencode(phasemapping)
2661 bundler.newpart(b'phase-heads', data=phasedata)
2666 bundler.newpart(b'phase-heads', data=phasedata)
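Aside: a hedged sketch of the bucketing step above, with plain integers standing in for phases.public (0) and phases.draft (1); the real code asks the repository's phase cache rather than a dict.

    import collections

    PUBLIC, DRAFT = 0, 1  # mirrors phases.public / phases.draft

    def heads_by_phase(heads, phase_of):
        buckets = collections.defaultdict(set)
        for head in heads:
            buckets[phase_of(head)].add(head)
        return buckets

    phase_of = {b'aa': PUBLIC, b'bb': DRAFT}.get
    buckets = heads_by_phase([b'aa', b'bb'], phase_of)
    assert buckets[PUBLIC] == {b'aa'} and buckets[DRAFT] == {b'bb'}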
2662
2667
2663
2668
2664 @getbundle2partsgenerator(b'hgtagsfnodes')
2669 @getbundle2partsgenerator(b'hgtagsfnodes')
2665 def _getbundletagsfnodes(
2670 def _getbundletagsfnodes(
2666 bundler,
2671 bundler,
2667 repo,
2672 repo,
2668 source,
2673 source,
2669 bundlecaps=None,
2674 bundlecaps=None,
2670 b2caps=None,
2675 b2caps=None,
2671 heads=None,
2676 heads=None,
2672 common=None,
2677 common=None,
2673 **kwargs,
2678 **kwargs,
2674 ):
2679 ):
2675 """Transfer the .hgtags filenodes mapping.
2680 """Transfer the .hgtags filenodes mapping.
2676
2681
2677 Only values for heads in this bundle will be transferred.
2682 Only values for heads in this bundle will be transferred.
2678
2683
2679 The part data consists of pairs of 20-byte changeset node and .hgtags
2684 The part data consists of pairs of 20-byte changeset node and .hgtags
2680 filenode raw values.
2685 filenode raw values.
2681 """
2686 """
2682 # Don't send unless:
2687 # Don't send unless:
2683 # - changesets are being exchanged,
2688 # - changesets are being exchanged,
2684 # - the client supports it.
2689 # - the client supports it.
2685 if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps):
2690 if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps):
2686 return
2691 return
2687
2692
2688 outgoing = _computeoutgoing(repo, heads, common)
2693 outgoing = _computeoutgoing(repo, heads, common)
2689 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
2694 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
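Aside: the docstring above fully describes the payload, so a sketch of the record layout is short; the node values here are fake 20-byte placeholders and the authoritative writer is bundle2.addparttagsfnodescache.

    def encode_fnodes(pairs):
        # each record: a 20-byte changeset node immediately followed by its
        # 20-byte .hgtags filenode; records are simply concatenated
        return b''.join(cs + fn for cs, fn in pairs)

    record = encode_fnodes([(b'\x01' * 20, b'\x02' * 20)])
    assert len(record) == 40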
2690
2695
2691
2696
2692 @getbundle2partsgenerator(b'cache:rev-branch-cache')
2697 @getbundle2partsgenerator(b'cache:rev-branch-cache')
2693 def _getbundlerevbranchcache(
2698 def _getbundlerevbranchcache(
2694 bundler,
2699 bundler,
2695 repo,
2700 repo,
2696 source,
2701 source,
2697 bundlecaps=None,
2702 bundlecaps=None,
2698 b2caps=None,
2703 b2caps=None,
2699 heads=None,
2704 heads=None,
2700 common=None,
2705 common=None,
2701 **kwargs,
2706 **kwargs,
2702 ):
2707 ):
2703 """Transfer the rev-branch-cache mapping
2708 """Transfer the rev-branch-cache mapping
2704
2709
2705 The payload is a series of data related to each branch
2710 The payload is a series of data related to each branch
2706
2711
2707 1) branch name length
2712 1) branch name length
2708 2) number of open heads
2713 2) number of open heads
2709 3) number of closed heads
2714 3) number of closed heads
2710 4) open heads nodes
2715 4) open heads nodes
2711 5) closed heads nodes
2716 5) closed heads nodes
2712 """
2717 """
2713 # Don't send unless:
2718 # Don't send unless:
2714 # - changesets are being exchanged,
2719 # - changesets are being exchanged,
2715 # - the client supports it.
2720 # - the client supports it.
2716 # - narrow bundle isn't in play (not currently compatible).
2721 # - narrow bundle isn't in play (not currently compatible).
2717 if (
2722 if (
2718 not kwargs.get('cg', True)
2723 not kwargs.get('cg', True)
2719 or not b2caps
2724 or not b2caps
2720 or b'rev-branch-cache' not in b2caps
2725 or b'rev-branch-cache' not in b2caps
2721 or kwargs.get('narrow', False)
2726 or kwargs.get('narrow', False)
2722 or repo.ui.has_section(_NARROWACL_SECTION)
2727 or repo.ui.has_section(_NARROWACL_SECTION)
2723 ):
2728 ):
2724 return
2729 return
2725
2730
2726 outgoing = _computeoutgoing(repo, heads, common)
2731 outgoing = _computeoutgoing(repo, heads, common)
2727 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
2732 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
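Aside: a sketch of the per-branch record enumerated in the docstring above. The three big-endian uint32 counters are an assumption made for illustration only; the authoritative encoder is bundle2.addpartrevbranchcache.

    import struct

    def encode_branch_entry(branch, open_heads, closed_heads):
        chunks = [struct.pack('>III', len(branch), len(open_heads), len(closed_heads))]
        chunks.append(branch)
        chunks.extend(sorted(open_heads))    # open head nodes
        chunks.extend(sorted(closed_heads))  # closed head nodes
        return b''.join(chunks)

    entry = encode_branch_entry(b'default', [b'\x11' * 20], [])
    assert len(entry) == 12 + len(b'default') + 20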
2728
2733
2729
2734
2730 def check_heads(repo, their_heads, context):
2735 def check_heads(repo, their_heads, context):
2731 """check if the heads of a repo have been modified
2736 """check if the heads of a repo have been modified
2732
2737
2733 Used by peer for unbundling.
2738 Used by peer for unbundling.
2734 """
2739 """
2735 heads = repo.heads()
2740 heads = repo.heads()
2736 heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest()
2741 heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest()
2737 if not (
2742 if not (
2738 their_heads == [b'force']
2743 their_heads == [b'force']
2739 or their_heads == heads
2744 or their_heads == heads
2740 or their_heads == [b'hashed', heads_hash]
2745 or their_heads == [b'hashed', heads_hash]
2741 ):
2746 ):
2742 # someone else committed/pushed/unbundled while we
2747 # someone else committed/pushed/unbundled while we
2743 # were transferring data
2748 # were transferring data
2744 raise error.PushRaced(
2749 raise error.PushRaced(
2745 b'repository changed while %s - please try again' % context
2750 b'repository changed while %s - please try again' % context
2746 )
2751 )
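Aside: the 'hashed' form accepted above is reproducible with the standard library: the client hashes its sorted view of the heads, the server recomputes the digest over its current heads, and any commit landing in between changes the digest and triggers PushRaced. A small sketch:

    import hashlib

    def heads_digest(heads):
        return hashlib.sha1(b''.join(sorted(heads))).digest()

    local = [b'\x22' * 20, b'\x11' * 20]
    # order does not matter because the heads are sorted before hashing
    assert heads_digest(local) == heads_digest(list(reversed(local)))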
2747
2752
2748
2753
2749 def unbundle(repo, cg, heads, source, url):
2754 def unbundle(repo, cg, heads, source, url):
2750 """Apply a bundle to a repo.
2755 """Apply a bundle to a repo.
2751
2756
2752 This function makes sure the repo is locked during the application and has a
2757 This function makes sure the repo is locked during the application and has a
2753 mechanism to check that no push race occurred between the creation of the
2758 mechanism to check that no push race occurred between the creation of the
2754 bundle and its application.
2759 bundle and its application.
2755
2760
2756 If the push was raced, a PushRaced exception is raised."""
2761 If the push was raced, a PushRaced exception is raised."""
2757 r = 0
2762 r = 0
2758 # need a transaction when processing a bundle2 stream
2763 # need a transaction when processing a bundle2 stream
2759 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2764 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2760 lockandtr = [None, None, None]
2765 lockandtr = [None, None, None]
2761 recordout = None
2766 recordout = None
2762 # quick fix for output mismatch with bundle2 in 3.4
2767 # quick fix for output mismatch with bundle2 in 3.4
2763 captureoutput = repo.ui.configbool(
2768 captureoutput = repo.ui.configbool(
2764 b'experimental', b'bundle2-output-capture'
2769 b'experimental', b'bundle2-output-capture'
2765 )
2770 )
2766 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2771 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2767 captureoutput = True
2772 captureoutput = True
2768 try:
2773 try:
2769 # note: outside bundle1, 'heads' is expected to be empty and this
2774 # note: outside bundle1, 'heads' is expected to be empty and this
2770 # 'check_heads' call will be a no-op
2775 # 'check_heads' call will be a no-op
2771 check_heads(repo, heads, b'uploading changes')
2776 check_heads(repo, heads, b'uploading changes')
2772 # push can proceed
2777 # push can proceed
2773 if not isinstance(cg, bundle2.unbundle20):
2778 if not isinstance(cg, bundle2.unbundle20):
2774 # legacy case: bundle1 (changegroup 01)
2779 # legacy case: bundle1 (changegroup 01)
2775 txnname = b"\n".join([source, urlutil.hidepassword(url)])
2780 txnname = b"\n".join([source, urlutil.hidepassword(url)])
2776 with repo.lock(), repo.transaction(txnname) as tr:
2781 with repo.lock(), repo.transaction(txnname) as tr:
2777 op = bundle2.applybundle(repo, cg, tr, source, url)
2782 op = bundle2.applybundle(repo, cg, tr, source, url)
2778 r = bundle2.combinechangegroupresults(op)
2783 r = bundle2.combinechangegroupresults(op)
2779 else:
2784 else:
2780 r = None
2785 r = None
2781 try:
2786 try:
2782
2787
2783 def gettransaction():
2788 def gettransaction():
2784 if not lockandtr[2]:
2789 if not lockandtr[2]:
2785 if not bookmod.bookmarksinstore(repo):
2790 if not bookmod.bookmarksinstore(repo):
2786 lockandtr[0] = repo.wlock()
2791 lockandtr[0] = repo.wlock()
2787 lockandtr[1] = repo.lock()
2792 lockandtr[1] = repo.lock()
2788 lockandtr[2] = repo.transaction(source)
2793 lockandtr[2] = repo.transaction(source)
2789 lockandtr[2].hookargs[b'source'] = source
2794 lockandtr[2].hookargs[b'source'] = source
2790 lockandtr[2].hookargs[b'url'] = url
2795 lockandtr[2].hookargs[b'url'] = url
2791 lockandtr[2].hookargs[b'bundle2'] = b'1'
2796 lockandtr[2].hookargs[b'bundle2'] = b'1'
2792 return lockandtr[2]
2797 return lockandtr[2]
2793
2798
2794 # Do greedy locking by default until we're satisfied with lazy
2799 # Do greedy locking by default until we're satisfied with lazy
2795 # locking.
2800 # locking.
2796 if not repo.ui.configbool(
2801 if not repo.ui.configbool(
2797 b'experimental', b'bundle2lazylocking'
2802 b'experimental', b'bundle2lazylocking'
2798 ):
2803 ):
2799 gettransaction()
2804 gettransaction()
2800
2805
2801 op = bundle2.bundleoperation(
2806 op = bundle2.bundleoperation(
2802 repo,
2807 repo,
2803 gettransaction,
2808 gettransaction,
2804 captureoutput=captureoutput,
2809 captureoutput=captureoutput,
2805 source=b'push',
2810 source=b'push',
2806 )
2811 )
2807 try:
2812 try:
2808 op = bundle2.processbundle(repo, cg, op=op)
2813 op = bundle2.processbundle(repo, cg, op=op)
2809 finally:
2814 finally:
2810 r = op.reply
2815 r = op.reply
2811 if captureoutput and r is not None:
2816 if captureoutput and r is not None:
2812 repo.ui.pushbuffer(error=True, subproc=True)
2817 repo.ui.pushbuffer(error=True, subproc=True)
2813
2818
2814 def recordout(output):
2819 def recordout(output):
2815 r.newpart(b'output', data=output, mandatory=False)
2820 r.newpart(b'output', data=output, mandatory=False)
2816
2821
2817 if lockandtr[2] is not None:
2822 if lockandtr[2] is not None:
2818 lockandtr[2].close()
2823 lockandtr[2].close()
2819 except BaseException as exc:
2824 except BaseException as exc:
2820 exc.duringunbundle2 = True
2825 exc.duringunbundle2 = True
2821 if captureoutput and r is not None:
2826 if captureoutput and r is not None:
2822 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2827 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2823
2828
2824 def recordout(output):
2829 def recordout(output):
2825 part = bundle2.bundlepart(
2830 part = bundle2.bundlepart(
2826 b'output', data=output, mandatory=False
2831 b'output', data=output, mandatory=False
2827 )
2832 )
2828 parts.append(part)
2833 parts.append(part)
2829
2834
2830 raise
2835 raise
2831 finally:
2836 finally:
2832 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2837 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2833 if recordout is not None:
2838 if recordout is not None:
2834 recordout(repo.ui.popbuffer())
2839 recordout(repo.ui.popbuffer())
2835 return r
2840 return r
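Aside: the lockandtr list above is a closure trick, not an API; the nested gettransaction() mutates the list so the locks and the transaction are only created once a bundle part actually needs them. A stripped-down sketch of the same pattern:

    def make_lazy(open_resource):
        state = [None]  # mutable cell shared with the nested function

        def get():
            if state[0] is None:
                state[0] = open_resource()
            return state[0]

        return get

    opened = []
    get = make_lazy(lambda: opened.append('tr') or 'tr')
    assert get() == 'tr' and get() == 'tr' and opened == ['tr']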
2836
2841
2837
2842
2838 def _maybeapplyclonebundle(pullop):
2843 def _maybeapplyclonebundle(pullop):
2839 """Apply a clone bundle from a remote, if possible."""
2844 """Apply a clone bundle from a remote, if possible."""
2840
2845
2841 repo = pullop.repo
2846 repo = pullop.repo
2842 remote = pullop.remote
2847 remote = pullop.remote
2843
2848
2844 if not repo.ui.configbool(b'ui', b'clonebundles'):
2849 if not repo.ui.configbool(b'ui', b'clonebundles'):
2845 return
2850 return
2846
2851
2847 # Only run if local repo is empty.
2852 # Only run if local repo is empty.
2848 if len(repo):
2853 if len(repo):
2849 return
2854 return
2850
2855
2851 if pullop.heads:
2856 if pullop.heads:
2852 return
2857 return
2853
2858
2854 if not remote.capable(b'clonebundles'):
2859 if not remote.capable(b'clonebundles'):
2855 return
2860 return
2856
2861
2857 with remote.commandexecutor() as e:
2862 with remote.commandexecutor() as e:
2858 res = e.callcommand(b'clonebundles', {}).result()
2863 res = e.callcommand(b'clonebundles', {}).result()
2859
2864
2860 # If we call the wire protocol command, that's good enough to record the
2865 # If we call the wire protocol command, that's good enough to record the
2861 # attempt.
2866 # attempt.
2862 pullop.clonebundleattempted = True
2867 pullop.clonebundleattempted = True
2863
2868
2864 entries = bundlecaches.parseclonebundlesmanifest(repo, res)
2869 entries = bundlecaches.parseclonebundlesmanifest(repo, res)
2865 if not entries:
2870 if not entries:
2866 repo.ui.note(
2871 repo.ui.note(
2867 _(
2872 _(
2868 b'no clone bundles available on remote; '
2873 b'no clone bundles available on remote; '
2869 b'falling back to regular clone\n'
2874 b'falling back to regular clone\n'
2870 )
2875 )
2871 )
2876 )
2872 return
2877 return
2873
2878
2874 entries = bundlecaches.filterclonebundleentries(
2879 entries = bundlecaches.filterclonebundleentries(
2875 repo, entries, streamclonerequested=pullop.streamclonerequested
2880 repo, entries, streamclonerequested=pullop.streamclonerequested
2876 )
2881 )
2877
2882
2878 if not entries:
2883 if not entries:
2879 # There is a thundering herd concern here. However, if a server
2884 # There is a thundering herd concern here. However, if a server
2880 # operator doesn't advertise bundles appropriate for its clients,
2885 # operator doesn't advertise bundles appropriate for its clients,
2881 # they deserve what's coming. Furthermore, from a client's
2886 # they deserve what's coming. Furthermore, from a client's
2882 # perspective, no automatic fallback would mean not being able to
2887 # perspective, no automatic fallback would mean not being able to
2883 # clone!
2888 # clone!
2884 repo.ui.warn(
2889 repo.ui.warn(
2885 _(
2890 _(
2886 b'no compatible clone bundles available on server; '
2891 b'no compatible clone bundles available on server; '
2887 b'falling back to regular clone\n'
2892 b'falling back to regular clone\n'
2888 )
2893 )
2889 )
2894 )
2890 repo.ui.warn(
2895 repo.ui.warn(
2891 _(b'(you may want to report this to the server operator)\n')
2896 _(b'(you may want to report this to the server operator)\n')
2892 )
2897 )
2893 return
2898 return
2894
2899
2895 entries = bundlecaches.sortclonebundleentries(repo.ui, entries)
2900 entries = bundlecaches.sortclonebundleentries(repo.ui, entries)
2896
2901
2897 url = entries[0][b'URL']
2902 url = entries[0][b'URL']
2898 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2903 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2899 if trypullbundlefromurl(repo.ui, repo, url, remote):
2904 if trypullbundlefromurl(repo.ui, repo, url, remote):
2900 repo.ui.status(_(b'finished applying clone bundle\n'))
2905 repo.ui.status(_(b'finished applying clone bundle\n'))
2901 # Bundle failed.
2906 # Bundle failed.
2902 #
2907 #
2903 # We abort by default to avoid the thundering herd of
2908 # We abort by default to avoid the thundering herd of
2904 # clients flooding a server that was expecting expensive
2909 # clients flooding a server that was expecting expensive
2905 # clone load to be offloaded.
2910 # clone load to be offloaded.
2906 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2911 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2907 repo.ui.warn(_(b'falling back to normal clone\n'))
2912 repo.ui.warn(_(b'falling back to normal clone\n'))
2908 else:
2913 else:
2909 raise error.Abort(
2914 raise error.Abort(
2910 _(b'error applying bundle'),
2915 _(b'error applying bundle'),
2911 hint=_(
2916 hint=_(
2912 b'if this error persists, consider contacting '
2917 b'if this error persists, consider contacting '
2913 b'the server operator or disable clone '
2918 b'the server operator or disable clone '
2914 b'bundles via '
2919 b'bundles via '
2915 b'"--config ui.clonebundles=false"'
2920 b'"--config ui.clonebundles=false"'
2916 ),
2921 ),
2917 )
2922 )
2918
2923
2919
2924
2920 def inline_clone_bundle_open(ui, url, peer):
2925 def inline_clone_bundle_open(ui, url, peer):
2921 if not peer:
2926 if not peer:
2922 raise error.Abort(_(b'no remote repository supplied for %s' % url))
2927 raise error.Abort(_(b'no remote repository supplied for %s' % url))
2923 clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :]
2928 clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :]
2924 peerclonebundle = peer.get_cached_bundle_inline(clonebundleid)
2929 peerclonebundle = peer.get_cached_bundle_inline(clonebundleid)
2925 return util.chunkbuffer(peerclonebundle)
2930 return util.chunkbuffer(peerclonebundle)
2926
2931
2927
2932
2928 def trypullbundlefromurl(ui, repo, url, peer):
2933 def trypullbundlefromurl(ui, repo, url, peer):
2929 """Attempt to apply a bundle from a URL."""
2934 """Attempt to apply a bundle from a URL."""
2930 with repo.lock(), repo.transaction(b'bundleurl') as tr:
2935 with repo.lock(), repo.transaction(b'bundleurl') as tr:
2931 try:
2936 try:
2932 if url.startswith(bundlecaches.CLONEBUNDLESCHEME):
2937 if url.startswith(bundlecaches.CLONEBUNDLESCHEME):
2933 fh = inline_clone_bundle_open(ui, url, peer)
2938 fh = inline_clone_bundle_open(ui, url, peer)
2934 else:
2939 else:
2935 fh = urlmod.open(ui, url)
2940 fh = urlmod.open(ui, url)
2936 cg = readbundle(ui, fh, b'stream')
2941 cg = readbundle(ui, fh, b'stream')
2937
2942
2938 if isinstance(cg, streamclone.streamcloneapplier):
2943 if isinstance(cg, streamclone.streamcloneapplier):
2939 cg.apply(repo)
2944 cg.apply(repo)
2940 else:
2945 else:
2941 bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
2946 bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
2942 return True
2947 return True
2943 except urlerr.httperror as e:
2948 except urlerr.httperror as e:
2944 ui.warn(
2949 ui.warn(
2945 _(b'HTTP error fetching bundle: %s\n')
2950 _(b'HTTP error fetching bundle: %s\n')
2946 % stringutil.forcebytestr(e)
2951 % stringutil.forcebytestr(e)
2947 )
2952 )
2948 except urlerr.urlerror as e:
2953 except urlerr.urlerror as e:
2949 ui.warn(
2954 ui.warn(
2950 _(b'error fetching bundle: %s\n')
2955 _(b'error fetching bundle: %s\n')
2951 % stringutil.forcebytestr(e.reason)
2956 % stringutil.forcebytestr(e.reason)
2952 )
2957 )
2953
2958
2954 return False
2959 return False
@@ -1,1156 +1,1177
1 # obsolete.py - obsolete markers handling
1 # obsolete.py - obsolete markers handling
2 #
2 #
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 # Logilab SA <contact@logilab.fr>
4 # Logilab SA <contact@logilab.fr>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 """Obsolete marker handling
9 """Obsolete marker handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
17 transformations performed by history rewrite operations, and help
18 building new tools to reconcile conflicting rewrite actions. To
18 building new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
19 facilitate conflict resolution, markers include various annotations
20 besides old and new changeset identifiers, such as creation date or
20 besides old and new changeset identifiers, such as creation date or
21 author name.
21 author name.
22
22
23 The old obsoleted changeset is called a "predecessor" and possible
23 The old obsoleted changeset is called a "predecessor" and possible
24 replacements are called "successors". Markers that used changeset X as
24 replacements are called "successors". Markers that used changeset X as
25 a predecessor are called "successor markers of X" because they hold
25 a predecessor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
26 information about the successors of X. Markers that use changeset Y as
27 a successor are called "predecessor markers of Y" because they hold
27 a successor are called "predecessor markers of Y" because they hold
28 information about the predecessors of Y.
28 information about the predecessors of Y.
29
29
30 Examples:
30 Examples:
31
31
32 - When changeset A is replaced by changeset A', one marker is stored:
32 - When changeset A is replaced by changeset A', one marker is stored:
33
33
34 (A, (A',))
34 (A, (A',))
35
35
36 - When changesets A and B are folded into a new changeset C, two markers are
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
37 stored:
38
38
39 (A, (C,)) and (B, (C,))
39 (A, (C,)) and (B, (C,))
40
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
42
43 (A, ())
43 (A, ())
44
44
45 - When changeset A is split into B and C, a single marker is used:
45 - When changeset A is split into B and C, a single marker is used:
46
46
47 (A, (B, C))
47 (A, (B, C))
48
48
49 We use a single marker to distinguish the "split" case from the "divergence"
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A into A' and
50 case. If two independent operations rewrite the same changeset A into A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
52 two markers will be created independently:
53
53
54 (A, (B,)) and (A, (C,))
54 (A, (B,)) and (A, (C,))
55
55
56 Format
56 Format
57 ------
57 ------
58
58
59 Markers are stored in an append-only file stored in
59 Markers are stored in an append-only file stored in
60 '.hg/store/obsstore'.
60 '.hg/store/obsstore'.
61
61
62 The file starts with a version header:
62 The file starts with a version header:
63
63
64 - 1 unsigned byte: version number, starting at zero.
64 - 1 unsigned byte: version number, starting at zero.
65
65
66 The header is followed by the markers. The marker format depends on the version. See the
66 The header is followed by the markers. The marker format depends on the version. See the
67 comment associated with each format for details.
67 comment associated with each format for details.
68
68
69 """
69 """
70
70
71 from __future__ import annotations
71 from __future__ import annotations
72
72
73 import binascii
73 import binascii
74 import struct
74 import struct
75 import weakref
75 import weakref
76
76
77 from .i18n import _
77 from .i18n import _
78 from .node import (
78 from .node import (
79 bin,
79 bin,
80 hex,
80 hex,
81 )
81 )
82 from . import (
82 from . import (
83 encoding,
83 encoding,
84 error,
84 error,
85 obsutil,
85 obsutil,
86 phases,
86 phases,
87 policy,
87 policy,
88 pycompat,
88 pycompat,
89 util,
89 util,
90 )
90 )
91 from .utils import (
91 from .utils import (
92 dateutil,
92 dateutil,
93 hashutil,
93 hashutil,
94 )
94 )
95
95
96 parsers = policy.importmod('parsers')
96 parsers = policy.importmod('parsers')
97
97
98 _pack = struct.pack
98 _pack = struct.pack
99 _unpack = struct.unpack
99 _unpack = struct.unpack
100 _calcsize = struct.calcsize
100 _calcsize = struct.calcsize
101 propertycache = util.propertycache
101 propertycache = util.propertycache
102
102
103 # Options for obsolescence
103 # Options for obsolescence
104 createmarkersopt = b'createmarkers'
104 createmarkersopt = b'createmarkers'
105 allowunstableopt = b'allowunstable'
105 allowunstableopt = b'allowunstable'
106 allowdivergenceopt = b'allowdivergence'
106 allowdivergenceopt = b'allowdivergence'
107 exchangeopt = b'exchange'
107 exchangeopt = b'exchange'
108
108
109
109
110 def _getoptionvalue(repo, option):
110 def _getoptionvalue(repo, option):
111 """Returns True if the given repository has the given obsolete option
111 """Returns True if the given repository has the given obsolete option
112 enabled.
112 enabled.
113 """
113 """
114 configkey = b'evolution.%s' % option
114 configkey = b'evolution.%s' % option
115 newconfig = repo.ui.configbool(b'experimental', configkey)
115 newconfig = repo.ui.configbool(b'experimental', configkey)
116
116
117 # Return the value only if defined
117 # Return the value only if defined
118 if newconfig is not None:
118 if newconfig is not None:
119 return newconfig
119 return newconfig
120
120
121 # Fallback on generic option
121 # Fallback on generic option
122 try:
122 try:
123 return repo.ui.configbool(b'experimental', b'evolution')
123 return repo.ui.configbool(b'experimental', b'evolution')
124 except (error.ConfigError, AttributeError):
124 except (error.ConfigError, AttributeError):
125 # Fallback on old-fashioned config
125 # Fallback on old-fashioned config
126 # inconsistent config: experimental.evolution
126 # inconsistent config: experimental.evolution
127 result = set(repo.ui.configlist(b'experimental', b'evolution'))
127 result = set(repo.ui.configlist(b'experimental', b'evolution'))
128
128
129 if b'all' in result:
129 if b'all' in result:
130 return True
130 return True
131
131
132 # Temporary hack for next check
132 # Temporary hack for next check
133 newconfig = repo.ui.config(b'experimental', b'evolution.createmarkers')
133 newconfig = repo.ui.config(b'experimental', b'evolution.createmarkers')
134 if newconfig:
134 if newconfig:
135 result.add(b'createmarkers')
135 result.add(b'createmarkers')
136
136
137 return option in result
137 return option in result
138
138
139
139
140 def getoptions(repo):
140 def getoptions(repo):
141 """Returns dicts showing state of obsolescence features."""
141 """Returns dicts showing state of obsolescence features."""
142
142
143 createmarkersvalue = _getoptionvalue(repo, createmarkersopt)
143 createmarkersvalue = _getoptionvalue(repo, createmarkersopt)
144 if createmarkersvalue:
144 if createmarkersvalue:
145 unstablevalue = _getoptionvalue(repo, allowunstableopt)
145 unstablevalue = _getoptionvalue(repo, allowunstableopt)
146 divergencevalue = _getoptionvalue(repo, allowdivergenceopt)
146 divergencevalue = _getoptionvalue(repo, allowdivergenceopt)
147 exchangevalue = _getoptionvalue(repo, exchangeopt)
147 exchangevalue = _getoptionvalue(repo, exchangeopt)
148 else:
148 else:
149 # if we cannot create obsolescence markers, we shouldn't exchange them
149 # if we cannot create obsolescence markers, we shouldn't exchange them
150 # or perform operations that lead to instability or divergence
150 # or perform operations that lead to instability or divergence
151 unstablevalue = False
151 unstablevalue = False
152 divergencevalue = False
152 divergencevalue = False
153 exchangevalue = False
153 exchangevalue = False
154
154
155 return {
155 return {
156 createmarkersopt: createmarkersvalue,
156 createmarkersopt: createmarkersvalue,
157 allowunstableopt: unstablevalue,
157 allowunstableopt: unstablevalue,
158 allowdivergenceopt: divergencevalue,
158 allowdivergenceopt: divergencevalue,
159 exchangeopt: exchangevalue,
159 exchangeopt: exchangevalue,
160 }
160 }
161
161
162
162
163 def isenabled(repo, option):
163 def isenabled(repo, option):
164 """Returns True if the given repository has the given obsolete option
164 """Returns True if the given repository has the given obsolete option
165 enabled.
165 enabled.
166 """
166 """
167 return getoptions(repo)[option]
167 return getoptions(repo)[option]
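Aside: a hedged sketch of the lookup order implemented by _getoptionvalue above, flattened into plain arguments instead of ui.configbool/configlist calls: the specific evolution.<option> switch wins, then the generic evolution boolean, then the legacy list form where 'all' enables everything.

    def option_enabled(option, specific, generic, legacy_list):
        if specific is not None:
            return specific
        if generic is not None:
            return generic
        return b'all' in legacy_list or option in legacy_list

    assert option_enabled(b'exchange', None, None, [b'createmarkers', b'exchange'])
    assert not option_enabled(b'exchange', False, True, [b'all'])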
168
168
169
169
170 # Creating aliases for marker flags because evolve extension looks for
170 # Creating aliases for marker flags because evolve extension looks for
171 # bumpedfix in obsolete.py
171 # bumpedfix in obsolete.py
172 bumpedfix = obsutil.bumpedfix
172 bumpedfix = obsutil.bumpedfix
173 usingsha256 = obsutil.usingsha256
173 usingsha256 = obsutil.usingsha256
174
174
175 ## Parsing and writing of version "0"
175 ## Parsing and writing of version "0"
176 #
176 #
177 # The header is followed by the markers. Each marker is made of:
177 # The header is followed by the markers. Each marker is made of:
178 #
178 #
179 # - 1 uint8 : number of new changesets "N", can be zero.
179 # - 1 uint8 : number of new changesets "N", can be zero.
180 #
180 #
181 # - 1 uint32: metadata size "M" in bytes.
181 # - 1 uint32: metadata size "M" in bytes.
182 #
182 #
183 # - 1 byte: a bit field. It is reserved for flags used in common
183 # - 1 byte: a bit field. It is reserved for flags used in common
184 # obsolete marker operations, to avoid repeated decoding of metadata
184 # obsolete marker operations, to avoid repeated decoding of metadata
185 # entries.
185 # entries.
186 #
186 #
187 # - 20 bytes: obsoleted changeset identifier.
187 # - 20 bytes: obsoleted changeset identifier.
188 #
188 #
189 # - N*20 bytes: new changesets identifiers.
189 # - N*20 bytes: new changesets identifiers.
190 #
190 #
191 # - M bytes: metadata as a sequence of nul-terminated strings. Each
191 # - M bytes: metadata as a sequence of nul-terminated strings. Each
192 # string contains a key and a value, separated by a colon ':', without
192 # string contains a key and a value, separated by a colon ':', without
193 # additional encoding. Keys cannot contain '\0' or ':' and values
193 # additional encoding. Keys cannot contain '\0' or ':' and values
194 # cannot contain '\0'.
194 # cannot contain '\0'.
195 _fm0version = 0
195 _fm0version = 0
196 _fm0fixed = b'>BIB20s'
196 _fm0fixed = b'>BIB20s'
197 _fm0node = b'20s'
197 _fm0node = b'20s'
198 _fm0fsize = _calcsize(_fm0fixed)
198 _fm0fsize = _calcsize(_fm0fixed)
199 _fm0fnodesize = _calcsize(_fm0node)
199 _fm0fnodesize = _calcsize(_fm0node)
200
200
201
201
202 def _fm0readmarkers(data, off, stop):
202 def _fm0readmarkers(data, off, stop):
203 # Loop on markers
203 # Loop on markers
204 while off < stop:
204 while off < stop:
205 # read fixed part
205 # read fixed part
206 cur = data[off : off + _fm0fsize]
206 cur = data[off : off + _fm0fsize]
207 off += _fm0fsize
207 off += _fm0fsize
208 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
208 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
209 # read replacement
209 # read replacement
210 sucs = ()
210 sucs = ()
211 if numsuc:
211 if numsuc:
212 s = _fm0fnodesize * numsuc
212 s = _fm0fnodesize * numsuc
213 cur = data[off : off + s]
213 cur = data[off : off + s]
214 sucs = _unpack(_fm0node * numsuc, cur)
214 sucs = _unpack(_fm0node * numsuc, cur)
215 off += s
215 off += s
216 # read metadata
216 # read metadata
217 # (metadata will be decoded on demand)
217 # (metadata will be decoded on demand)
218 metadata = data[off : off + mdsize]
218 metadata = data[off : off + mdsize]
219 if len(metadata) != mdsize:
219 if len(metadata) != mdsize:
220 raise error.Abort(
220 raise error.Abort(
221 _(
221 _(
222 b'parsing obsolete marker: metadata is too '
222 b'parsing obsolete marker: metadata is too '
223 b'short, %d bytes expected, got %d'
223 b'short, %d bytes expected, got %d'
224 )
224 )
225 % (mdsize, len(metadata))
225 % (mdsize, len(metadata))
226 )
226 )
227 off += mdsize
227 off += mdsize
228 metadata = _fm0decodemeta(metadata)
228 metadata = _fm0decodemeta(metadata)
229 try:
229 try:
230 when, offset = metadata.pop(b'date', b'0 0').split(b' ')
230 when, offset = metadata.pop(b'date', b'0 0').split(b' ')
231 date = float(when), int(offset)
231 date = float(when), int(offset)
232 except ValueError:
232 except ValueError:
233 date = (0.0, 0)
233 date = (0.0, 0)
234 parents = None
234 parents = None
235 if b'p2' in metadata:
235 if b'p2' in metadata:
236 parents = (metadata.pop(b'p1', None), metadata.pop(b'p2', None))
236 parents = (metadata.pop(b'p1', None), metadata.pop(b'p2', None))
237 elif b'p1' in metadata:
237 elif b'p1' in metadata:
238 parents = (metadata.pop(b'p1', None),)
238 parents = (metadata.pop(b'p1', None),)
239 elif b'p0' in metadata:
239 elif b'p0' in metadata:
240 parents = ()
240 parents = ()
241 if parents is not None:
241 if parents is not None:
242 try:
242 try:
243 parents = tuple(bin(p) for p in parents)
243 parents = tuple(bin(p) for p in parents)
244 # if parent content is not a nodeid, drop the data
244 # if parent content is not a nodeid, drop the data
245 for p in parents:
245 for p in parents:
246 if len(p) != 20:
246 if len(p) != 20:
247 parents = None
247 parents = None
248 break
248 break
249 except binascii.Error:
249 except binascii.Error:
250 # if content cannot be translated to nodeid drop the data.
250 # if content cannot be translated to nodeid drop the data.
251 parents = None
251 parents = None
252
252
253 metadata = tuple(sorted(metadata.items()))
253 metadata = tuple(sorted(metadata.items()))
254
254
255 yield (pre, sucs, flags, metadata, date, parents)
255 yield (pre, sucs, flags, metadata, date, parents)
256
256
257
257
258 def _fm0encodeonemarker(marker):
258 def _fm0encodeonemarker(marker):
259 pre, sucs, flags, metadata, date, parents = marker
259 pre, sucs, flags, metadata, date, parents = marker
260 if flags & usingsha256:
260 if flags & usingsha256:
261 raise error.Abort(_(b'cannot handle sha256 with old obsstore format'))
261 raise error.Abort(_(b'cannot handle sha256 with old obsstore format'))
262 metadata = dict(metadata)
262 metadata = dict(metadata)
263 time, tz = date
263 time, tz = date
264 metadata[b'date'] = b'%r %i' % (time, tz)
264 metadata[b'date'] = b'%r %i' % (time, tz)
265 if parents is not None:
265 if parents is not None:
266 if not parents:
266 if not parents:
267 # mark that we explicitly recorded no parents
267 # mark that we explicitly recorded no parents
268 metadata[b'p0'] = b''
268 metadata[b'p0'] = b''
269 for i, p in enumerate(parents, 1):
269 for i, p in enumerate(parents, 1):
270 metadata[b'p%i' % i] = hex(p)
270 metadata[b'p%i' % i] = hex(p)
271 metadata = _fm0encodemeta(metadata)
271 metadata = _fm0encodemeta(metadata)
272 numsuc = len(sucs)
272 numsuc = len(sucs)
273 format = _fm0fixed + (_fm0node * numsuc)
273 format = _fm0fixed + (_fm0node * numsuc)
274 data = [numsuc, len(metadata), flags, pre]
274 data = [numsuc, len(metadata), flags, pre]
275 data.extend(sucs)
275 data.extend(sucs)
276 return _pack(format, *data) + metadata
276 return _pack(format, *data) + metadata
277
277
278
278
279 def _fm0encodemeta(meta):
279 def _fm0encodemeta(meta):
280 """Return encoded metadata string to string mapping.
280 """Return encoded metadata string to string mapping.
281
281
282 Assume no ':' in key and no '\0' in both key and value."""
282 Assume no ':' in key and no '\0' in both key and value."""
283 for key, value in meta.items():
283 for key, value in meta.items():
284 if b':' in key or b'\0' in key:
284 if b':' in key or b'\0' in key:
285 raise ValueError(b"':' and '\0' are forbidden in metadata key'")
285 raise ValueError(b"':' and '\0' are forbidden in metadata key'")
286 if b'\0' in value:
286 if b'\0' in value:
287 raise ValueError(b"':' is forbidden in metadata value'")
287 raise ValueError(b"':' is forbidden in metadata value'")
288 return b'\0'.join([b'%s:%s' % (k, meta[k]) for k in sorted(meta)])
288 return b'\0'.join([b'%s:%s' % (k, meta[k]) for k in sorted(meta)])
289
289
290
290
291 def _fm0decodemeta(data):
291 def _fm0decodemeta(data):
292 """Return string to string dictionary from encoded version."""
292 """Return string to string dictionary from encoded version."""
293 d = {}
293 d = {}
294 for l in data.split(b'\0'):
294 for l in data.split(b'\0'):
295 if l:
295 if l:
296 key, value = l.split(b':', 1)
296 key, value = l.split(b':', 1)
297 d[key] = value
297 d[key] = value
298 return d
298 return d
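Aside: the two helpers above round-trip a flat mapping through the version-0 metadata layout, '\0'-separated key:value entries with no escaping; a self-contained copy for experimentation:

    def encode_meta(meta):
        return b'\0'.join(b'%s:%s' % (k, meta[k]) for k in sorted(meta))

    def decode_meta(data):
        return dict(item.split(b':', 1) for item in data.split(b'\0') if item)

    meta = {b'user': b'alice', b'operation': b'amend'}
    assert decode_meta(encode_meta(meta)) == meta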
299
299
300
300
301 ## Parsing and writing of version "1"
301 ## Parsing and writing of version "1"
302 #
302 #
303 # The header is followed by the markers. Each marker is made of:
303 # The header is followed by the markers. Each marker is made of:
304 #
304 #
305 # - uint32: total size of the marker (including this field)
305 # - uint32: total size of the marker (including this field)
306 #
306 #
307 # - float64: date in seconds since epoch
307 # - float64: date in seconds since epoch
308 #
308 #
309 # - int16: timezone offset in minutes
309 # - int16: timezone offset in minutes
310 #
310 #
311 # - uint16: a bit field. It is reserved for flags used in common
311 # - uint16: a bit field. It is reserved for flags used in common
312 # obsolete marker operations, to avoid repeated decoding of metadata
312 # obsolete marker operations, to avoid repeated decoding of metadata
313 # entries.
313 # entries.
314 #
314 #
315 # - uint8: number of successors "N", can be zero.
315 # - uint8: number of successors "N", can be zero.
316 #
316 #
317 # - uint8: number of parents "P", can be zero.
317 # - uint8: number of parents "P", can be zero.
318 #
318 #
319 # 0: parents data stored but no parent,
319 # 0: parents data stored but no parent,
320 # 1: one parent stored,
320 # 1: one parent stored,
321 # 2: two parents stored,
321 # 2: two parents stored,
322 # 3: no parent data stored
322 # 3: no parent data stored
323 #
323 #
324 # - uint8: number of metadata entries M
324 # - uint8: number of metadata entries M
325 #
325 #
326 # - 20 or 32 bytes: predecessor changeset identifier.
326 # - 20 or 32 bytes: predecessor changeset identifier.
327 #
327 #
328 # - N*(20 or 32) bytes: successors changesets identifiers.
328 # - N*(20 or 32) bytes: successors changesets identifiers.
329 #
329 #
330 # - P*(20 or 32) bytes: parents of the predecessors changesets.
330 # - P*(20 or 32) bytes: parents of the predecessors changesets.
331 #
331 #
332 # - M*(uint8, uint8): size of all metadata entries (key and value)
332 # - M*(uint8, uint8): size of all metadata entries (key and value)
333 #
333 #
334 # - remaining bytes: the metadata, each (key, value) pair after the other.
334 # - remaining bytes: the metadata, each (key, value) pair after the other.
335 _fm1version = 1
335 _fm1version = 1
336 _fm1fixed = b'>IdhHBBB'
336 _fm1fixed = b'>IdhHBBB'
337 _fm1nodesha1 = b'20s'
337 _fm1nodesha1 = b'20s'
338 _fm1nodesha256 = b'32s'
338 _fm1nodesha256 = b'32s'
339 _fm1nodesha1size = _calcsize(_fm1nodesha1)
339 _fm1nodesha1size = _calcsize(_fm1nodesha1)
340 _fm1nodesha256size = _calcsize(_fm1nodesha256)
340 _fm1nodesha256size = _calcsize(_fm1nodesha256)
341 _fm1fsize = _calcsize(_fm1fixed)
341 _fm1fsize = _calcsize(_fm1fixed)
342 _fm1parentnone = 3
342 _fm1parentnone = 3
343 _fm1metapair = b'BB'
343 _fm1metapair = b'BB'
344 _fm1metapairsize = _calcsize(_fm1metapair)
344 _fm1metapairsize = _calcsize(_fm1metapair)
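Aside: the fixed-width prefix described in the version-1 layout above can be checked directly: uint32 size, float64 date, int16 timezone, uint16 flags and three uint8 counters add up to 19 bytes.

    import struct

    assert struct.calcsize('>IdhHBBB') == 4 + 8 + 2 + 2 + 1 + 1 + 1 == 19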
345
345
346
346
347 def _fm1purereadmarkers(data, off, stop):
347 def _fm1purereadmarkers(data, off, stop):
348 # make some global constants local for performance
348 # make some global constants local for performance
349 noneflag = _fm1parentnone
349 noneflag = _fm1parentnone
350 sha2flag = usingsha256
350 sha2flag = usingsha256
351 sha1size = _fm1nodesha1size
351 sha1size = _fm1nodesha1size
352 sha2size = _fm1nodesha256size
352 sha2size = _fm1nodesha256size
353 sha1fmt = _fm1nodesha1
353 sha1fmt = _fm1nodesha1
354 sha2fmt = _fm1nodesha256
354 sha2fmt = _fm1nodesha256
355 metasize = _fm1metapairsize
355 metasize = _fm1metapairsize
356 metafmt = _fm1metapair
356 metafmt = _fm1metapair
357 fsize = _fm1fsize
357 fsize = _fm1fsize
358 unpack = _unpack
358 unpack = _unpack
359
359
360 # Loop on markers
360 # Loop on markers
361 ufixed = struct.Struct(_fm1fixed).unpack
361 ufixed = struct.Struct(_fm1fixed).unpack
362
362
363 while off < stop:
363 while off < stop:
364 # read fixed part
364 # read fixed part
365 o1 = off + fsize
365 o1 = off + fsize
366 t, secs, tz, flags, numsuc, numpar, nummeta = ufixed(data[off:o1])
366 t, secs, tz, flags, numsuc, numpar, nummeta = ufixed(data[off:o1])
367
367
368 if flags & sha2flag:
368 if flags & sha2flag:
369 nodefmt = sha2fmt
369 nodefmt = sha2fmt
370 nodesize = sha2size
370 nodesize = sha2size
371 else:
371 else:
372 nodefmt = sha1fmt
372 nodefmt = sha1fmt
373 nodesize = sha1size
373 nodesize = sha1size
374
374
375 (prec,) = unpack(nodefmt, data[o1 : o1 + nodesize])
375 (prec,) = unpack(nodefmt, data[o1 : o1 + nodesize])
376 o1 += nodesize
376 o1 += nodesize
377
377
378 # read 0 or more successors
378 # read 0 or more successors
379 if numsuc == 1:
379 if numsuc == 1:
380 o2 = o1 + nodesize
380 o2 = o1 + nodesize
381 sucs = (data[o1:o2],)
381 sucs = (data[o1:o2],)
382 else:
382 else:
383 o2 = o1 + nodesize * numsuc
383 o2 = o1 + nodesize * numsuc
384 sucs = unpack(nodefmt * numsuc, data[o1:o2])
384 sucs = unpack(nodefmt * numsuc, data[o1:o2])
385
385
386 # read parents
386 # read parents
387 if numpar == noneflag:
387 if numpar == noneflag:
388 o3 = o2
388 o3 = o2
389 parents = None
389 parents = None
390 elif numpar == 1:
390 elif numpar == 1:
391 o3 = o2 + nodesize
391 o3 = o2 + nodesize
392 parents = (data[o2:o3],)
392 parents = (data[o2:o3],)
393 else:
393 else:
394 o3 = o2 + nodesize * numpar
394 o3 = o2 + nodesize * numpar
395 parents = unpack(nodefmt * numpar, data[o2:o3])
395 parents = unpack(nodefmt * numpar, data[o2:o3])
396
396
397 # read metadata
397 # read metadata
398 off = o3 + metasize * nummeta
398 off = o3 + metasize * nummeta
399 metapairsize = unpack(b'>' + (metafmt * nummeta), data[o3:off])
399 metapairsize = unpack(b'>' + (metafmt * nummeta), data[o3:off])
400 metadata = []
400 metadata = []
401 for idx in range(0, len(metapairsize), 2):
401 for idx in range(0, len(metapairsize), 2):
402 o1 = off + metapairsize[idx]
402 o1 = off + metapairsize[idx]
403 o2 = o1 + metapairsize[idx + 1]
403 o2 = o1 + metapairsize[idx + 1]
404 metadata.append((data[off:o1], data[o1:o2]))
404 metadata.append((data[off:o1], data[o1:o2]))
405 off = o2
405 off = o2
406
406
407 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
407 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
408
408
409
409
410 def _fm1encodeonemarker(marker):
410 def _fm1encodeonemarker(marker):
411 pre, sucs, flags, metadata, date, parents = marker
411 pre, sucs, flags, metadata, date, parents = marker
412 # determine node size
412 # determine node size
413 _fm1node = _fm1nodesha1
413 _fm1node = _fm1nodesha1
414 if flags & usingsha256:
414 if flags & usingsha256:
415 _fm1node = _fm1nodesha256
415 _fm1node = _fm1nodesha256
416 numsuc = len(sucs)
416 numsuc = len(sucs)
417 numextranodes = 1 + numsuc
417 numextranodes = 1 + numsuc
418 if parents is None:
418 if parents is None:
419 numpar = _fm1parentnone
419 numpar = _fm1parentnone
420 else:
420 else:
421 numpar = len(parents)
421 numpar = len(parents)
422 numextranodes += numpar
422 numextranodes += numpar
423 formatnodes = _fm1node * numextranodes
423 formatnodes = _fm1node * numextranodes
424 formatmeta = _fm1metapair * len(metadata)
424 formatmeta = _fm1metapair * len(metadata)
425 format = _fm1fixed + formatnodes + formatmeta
425 format = _fm1fixed + formatnodes + formatmeta
426 # tz is stored in minutes so we divide by 60
426 # tz is stored in minutes so we divide by 60
427 tz = date[1] // 60
427 tz = date[1] // 60
428 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
428 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
429 data.extend(sucs)
429 data.extend(sucs)
430 if parents is not None:
430 if parents is not None:
431 data.extend(parents)
431 data.extend(parents)
432 totalsize = _calcsize(format)
432 totalsize = _calcsize(format)
433 for key, value in metadata:
433 for key, value in metadata:
434 lk = len(key)
434 lk = len(key)
435 lv = len(value)
435 lv = len(value)
436 if lk > 255:
436 if lk > 255:
437 msg = (
437 msg = (
438 b'obsstore metadata key cannot be longer than 255 bytes'
438 b'obsstore metadata key cannot be longer than 255 bytes'
439 b' (key "%s" is %u bytes)'
439 b' (key "%s" is %u bytes)'
440 ) % (key, lk)
440 ) % (key, lk)
441 raise error.ProgrammingError(msg)
441 raise error.ProgrammingError(msg)
442 if lv > 255:
442 if lv > 255:
443 msg = (
443 msg = (
444 b'obsstore metadata value cannot be longer than 255 bytes'
444 b'obsstore metadata value cannot be longer than 255 bytes'
445 b' (value "%s" for key "%s" is %u bytes)'
445 b' (value "%s" for key "%s" is %u bytes)'
446 ) % (value, key, lv)
446 ) % (value, key, lv)
447 raise error.ProgrammingError(msg)
447 raise error.ProgrammingError(msg)
448 data.append(lk)
448 data.append(lk)
449 data.append(lv)
449 data.append(lv)
450 totalsize += lk + lv
450 totalsize += lk + lv
451 data[0] = totalsize
451 data[0] = totalsize
452 data = [_pack(format, *data)]
452 data = [_pack(format, *data)]
453 for key, value in metadata:
453 for key, value in metadata:
454 data.append(key)
454 data.append(key)
455 data.append(value)
455 data.append(value)
456 return b''.join(data)
456 return b''.join(data)
457
457
458
458
459 def _fm1readmarkers(data, off, stop):
459 def _fm1readmarkers(data, off, stop):
460 native = getattr(parsers, 'fm1readmarkers', None)
460 native = getattr(parsers, 'fm1readmarkers', None)
461 if not native:
461 if not native:
462 return _fm1purereadmarkers(data, off, stop)
462 return _fm1purereadmarkers(data, off, stop)
463 return native(data, off, stop)
463 return native(data, off, stop)
464
464
465
465
466 # mapping to read/write various marker formats
466 # mapping to read/write various marker formats
467 # <version> -> (decoder, encoder)
467 # <version> -> (decoder, encoder)
468 formats = {
468 formats = {
469 _fm0version: (_fm0readmarkers, _fm0encodeonemarker),
469 _fm0version: (_fm0readmarkers, _fm0encodeonemarker),
470 _fm1version: (_fm1readmarkers, _fm1encodeonemarker),
470 _fm1version: (_fm1readmarkers, _fm1encodeonemarker),
471 }
471 }
472
472
473
473
474 def _readmarkerversion(data):
474 def _readmarkerversion(data):
475 return _unpack(b'>B', data[0:1])[0]
475 return _unpack(b'>B', data[0:1])[0]
476
476
477
477
478 @util.nogc
478 @util.nogc
479 def _readmarkers(data, off=None, stop=None):
479 def _readmarkers(data, off=None, stop=None):
480 """Read and enumerate markers from raw data"""
480 """Read and enumerate markers from raw data"""
481 diskversion = _readmarkerversion(data)
481 diskversion = _readmarkerversion(data)
482 if not off:
482 if not off:
483 off = 1 # skip 1 byte version number
483 off = 1 # skip 1 byte version number
484 if stop is None:
484 if stop is None:
485 stop = len(data)
485 stop = len(data)
486 if diskversion not in formats:
486 if diskversion not in formats:
487 msg = _(b'parsing obsolete marker: unknown version %r') % diskversion
487 msg = _(b'parsing obsolete marker: unknown version %r') % diskversion
488 raise error.UnknownVersion(msg, version=diskversion)
488 raise error.UnknownVersion(msg, version=diskversion)
489 return diskversion, formats[diskversion][0](data, off, stop)
489 return diskversion, formats[diskversion][0](data, off, stop)
490
490
491
491
492 def encodeheader(version=_fm0version):
492 def encodeheader(version=_fm0version):
493 return _pack(b'>B', version)
493 return _pack(b'>B', version)
494
494
495
495
496 def encodemarkers(markers, addheader=False, version=_fm0version):
496 def encodemarkers(markers, addheader=False, version=_fm0version):
497 # Kept separate from flushmarkers(), it will be reused for
497 # Kept separate from flushmarkers(), it will be reused for
498 # markers exchange.
498 # markers exchange.
499 encodeone = formats[version][1]
499 encodeone = formats[version][1]
500 if addheader:
500 if addheader:
501 yield encodeheader(version)
501 yield encodeheader(version)
502 for marker in markers:
502 for marker in markers:
503 yield encodeone(marker)
503 yield encodeone(marker)
504
504
505
505
506 @util.nogc
506 @util.nogc
507 def _addsuccessors(successors, markers):
507 def _addsuccessors(successors, markers):
508 for mark in markers:
508 for mark in markers:
509 successors.setdefault(mark[0], set()).add(mark)
509 successors.setdefault(mark[0], set()).add(mark)
510
510
511
511
512 @util.nogc
512 @util.nogc
513 def _addpredecessors(predecessors, markers):
513 def _addpredecessors(predecessors, markers):
514 for mark in markers:
514 for mark in markers:
515 for suc in mark[1]:
515 for suc in mark[1]:
516 predecessors.setdefault(suc, set()).add(mark)
516 predecessors.setdefault(suc, set()).add(mark)
517
517
518
518
519 @util.nogc
519 @util.nogc
520 def _addchildren(children, markers):
520 def _addchildren(children, markers):
521 for mark in markers:
521 for mark in markers:
522 parents = mark[5]
522 parents = mark[5]
523 if parents is not None:
523 if parents is not None:
524 for p in parents:
524 for p in parents:
525 children.setdefault(p, set()).add(mark)
525 children.setdefault(p, set()).add(mark)
526
526
527
527
528 def _checkinvalidmarkers(repo, markers):
528 def _checkinvalidmarkers(repo, markers):
529 """search for marker with invalid data and raise error if needed
529 """search for marker with invalid data and raise error if needed
530
530
531 Exists as a separate function so the evolve extension can implement a more
531 Exists as a separate function so the evolve extension can implement a more
532 subtle handling.
532 subtle handling.
533 """
533 """
534 for mark in markers:
534 for mark in markers:
535 if repo.nullid in mark[1]:
535 if repo.nullid in mark[1]:
536 raise error.Abort(
536 raise error.Abort(
537 _(
537 _(
538 b'bad obsolescence marker detected: '
538 b'bad obsolescence marker detected: '
539 b'invalid successors nullid'
539 b'invalid successors nullid'
540 )
540 )
541 )
541 )
542
542
543
543
544 class obsstore:
544 class obsstore:
545 """Store obsolete markers
545 """Store obsolete markers
546
546
547 Markers can be accessed with three mappings:
547 Markers can be accessed with three mappings:
548 - predecessors[x] -> set(markers on predecessors edges of x)
548 - predecessors[x] -> set(markers on predecessors edges of x)
549 - successors[x] -> set(markers on successors edges of x)
549 - successors[x] -> set(markers on successors edges of x)
550 - children[x] -> set(markers on predecessors edges of children(x))
550 - children[x] -> set(markers on predecessors edges of children(x))
551 """
551 """
552
552
553 fields = (b'prec', b'succs', b'flag', b'meta', b'date', b'parents')
553 fields = (b'prec', b'succs', b'flag', b'meta', b'date', b'parents')
554 # prec: nodeid, predecessors changesets
554 # prec: nodeid, predecessors changesets
555 # succs: tuple of nodeid, successor changesets (0-N length)
555 # succs: tuple of nodeid, successor changesets (0-N length)
556 # flag: integer, flag field carrying modifier for the markers (see doc)
556 # flag: integer, flag field carrying modifier for the markers (see doc)
557 # meta: binary blob in UTF-8, encoded metadata dictionary
557 # meta: binary blob in UTF-8, encoded metadata dictionary
558 # date: (float, int) tuple, date of marker creation
558 # date: (float, int) tuple, date of marker creation
559 # parents: (tuple of nodeid) or None, parents of predecessors
559 # parents: (tuple of nodeid) or None, parents of predecessors
560 # None is used when no data has been recorded
560 # None is used when no data has been recorded
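Aside: a sketch of the successor/predecessor indexes the class docstring promises, built from the same 6-tuple layout listed above; the real store fills these lazily through the _addsuccessors/_addpredecessors helpers shown earlier.

    def index_markers(markers):
        successors, predecessors = {}, {}
        for mark in markers:
            successors.setdefault(mark[0], set()).add(mark)
            for suc in mark[1]:
                predecessors.setdefault(suc, set()).add(mark)
        return successors, predecessors

    m = (b'A', (b'B',), 0, (), (0.0, 0), None)
    succ, pred = index_markers([m])
    assert succ[b'A'] == {m} and pred[b'B'] == {m}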
561
561
562 def __init__(self, repo, svfs, defaultformat=_fm1version, readonly=False):
562 def __init__(self, repo, svfs, defaultformat=_fm1version, readonly=False):
563 # caches for various obsolescence-related data
563 # caches for various obsolescence-related data
564 self.caches = {}
564 self.caches = {}
565 self.svfs = svfs
565 self.svfs = svfs
566 self._repo = weakref.ref(repo)
566 self._repo = weakref.ref(repo)
567 self._defaultformat = defaultformat
567 self._defaultformat = defaultformat
568 self._readonly = readonly
568 self._readonly = readonly
569
569
570 @property
570 @property
571 def repo(self):
571 def repo(self):
572 r = self._repo()
572 r = self._repo()
573 if r is None:
573 if r is None:
574 msg = "using the obsstore of a deallocated repo"
574 msg = "using the obsstore of a deallocated repo"
575 raise error.ProgrammingError(msg)
575 raise error.ProgrammingError(msg)
576 return r
576 return r
577
577
578 def __iter__(self):
578 def __iter__(self):
579 return iter(self._all)
579 return iter(self._all)
580
580
581 def __len__(self):
581 def __len__(self):
582 return len(self._all)
582 return len(self._all)
583
583
584 def __nonzero__(self):
584 def __nonzero__(self):
585 from . import statichttprepo
585 from . import statichttprepo
586
586
587 if isinstance(self.repo, statichttprepo.statichttprepository):
587 if isinstance(self.repo, statichttprepo.statichttprepository):
588 # If repo is accessed via static HTTP, then we can't use os.stat()
588 # If repo is accessed via static HTTP, then we can't use os.stat()
589 # to just peek at the file size.
589 # to just peek at the file size.
590 return len(self._data) > 1
590 return len(self._data) > 1
591 if not self._cached('_all'):
591 if not self._cached('_all'):
592 try:
592 try:
593 return self.svfs.stat(b'obsstore').st_size > 1
593 return self.svfs.stat(b'obsstore').st_size > 1
594 except FileNotFoundError:
594 except FileNotFoundError:
595 # just build an empty _all list if no obsstore exists, which
595 # just build an empty _all list if no obsstore exists, which
596 # avoids further stat() syscalls
596 # avoids further stat() syscalls
597 pass
597 pass
598 return bool(self._all)
598 return bool(self._all)
599
599
600 __bool__ = __nonzero__
600 __bool__ = __nonzero__
601
601
602 @property
602 @property
603 def readonly(self):
603 def readonly(self):
604 """True if marker creation is disabled
604 """True if marker creation is disabled
605
605
606 Remove me in the future when obsolete marker is always on."""
606 Remove me in the future when obsolete marker is always on."""
607 return self._readonly
607 return self._readonly
608
608
609 def create(
609 def create(
610 self,
610 self,
611 transaction,
611 transaction,
612 prec,
612 prec,
613 succs=(),
613 succs=(),
614 flag=0,
614 flag=0,
615 parents=None,
615 parents=None,
616 date=None,
616 date=None,
617 metadata=None,
617 metadata=None,
618 ui=None,
618 ui=None,
619 ):
619 ):
620 """obsolete: add a new obsolete marker
620 """obsolete: add a new obsolete marker
621
621
622 * ensuring it is hashable
622 * ensuring it is hashable
623 * check mandatory metadata
623 * check mandatory metadata
624 * encode metadata
624 * encode metadata
625
625
626 If you are a human writing code that creates markers, you want to use
626 If you are a human writing code that creates markers, you want to use
627 the `createmarkers` function in this module instead.
627 the `createmarkers` function in this module instead.
628
628
629 return True if a new marker has been added, False if the marker
629 return True if a new marker has been added, False if the marker
630 already existed (no op).
630 already existed (no op).
631 """
631 """
632 flag = int(flag)
632 flag = int(flag)
633 if metadata is None:
633 if metadata is None:
634 metadata = {}
634 metadata = {}
635 if date is None:
635 if date is None:
636 if b'date' in metadata:
636 if b'date' in metadata:
637 # as a courtesy for out-of-tree extensions
637 # as a courtesy for out-of-tree extensions
638 date = dateutil.parsedate(metadata.pop(b'date'))
638 date = dateutil.parsedate(metadata.pop(b'date'))
639 elif ui is not None:
639 elif ui is not None:
640 date = ui.configdate(b'devel', b'default-date')
640 date = ui.configdate(b'devel', b'default-date')
641 if date is None:
641 if date is None:
642 date = dateutil.makedate()
642 date = dateutil.makedate()
643 else:
643 else:
644 date = dateutil.makedate()
644 date = dateutil.makedate()
645 if flag & usingsha256:
645 if flag & usingsha256:
646 if len(prec) != 32:
646 if len(prec) != 32:
647 raise ValueError(prec)
647 raise ValueError(prec)
648 for succ in succs:
648 for succ in succs:
649 if len(succ) != 32:
649 if len(succ) != 32:
650 raise ValueError(succ)
650 raise ValueError(succ)
651 else:
651 else:
652 if len(prec) != 20:
652 if len(prec) != 20:
653 raise ValueError(prec)
653 raise ValueError(prec)
654 for succ in succs:
654 for succ in succs:
655 if len(succ) != 20:
655 if len(succ) != 20:
656 raise ValueError(succ)
656 raise ValueError(succ)
657 if prec in succs:
657 if prec in succs:
658 raise ValueError('in-marker cycle with %s' % prec.hex())
658 raise ValueError('in-marker cycle with %s' % prec.hex())
659
659
660 metadata = tuple(sorted(metadata.items()))
660 metadata = tuple(sorted(metadata.items()))
661 for k, v in metadata:
661 for k, v in metadata:
662 try:
662 try:
663 # might be better to reject non-ASCII keys
663 # might be better to reject non-ASCII keys
664 k.decode('utf-8')
664 k.decode('utf-8')
665 v.decode('utf-8')
665 v.decode('utf-8')
666 except UnicodeDecodeError:
666 except UnicodeDecodeError:
667 raise error.ProgrammingError(
667 raise error.ProgrammingError(
668 b'obsstore metadata must be valid UTF-8 sequence '
668 b'obsstore metadata must be valid UTF-8 sequence '
669 b'(key = %r, value = %r)'
669 b'(key = %r, value = %r)'
670 % (pycompat.bytestr(k), pycompat.bytestr(v))
670 % (pycompat.bytestr(k), pycompat.bytestr(v))
671 )
671 )
672
672
673 marker = (bytes(prec), tuple(succs), flag, metadata, date, parents)
673 marker = (bytes(prec), tuple(succs), flag, metadata, date, parents)
674 return bool(self.add(transaction, [marker]))
674 return bool(self.add(transaction, [marker]))
675
675
676 def add(self, transaction, markers):
676 def add(self, transaction, markers):
677 """Add new markers to the store
677 """Add new markers to the store
678
678
679 Takes care of filtering out duplicates.
679 Takes care of filtering out duplicates.
680 Returns the number of new markers."""
680 Returns the number of new markers."""
681 if self._readonly:
681 if self._readonly:
682 raise error.Abort(
682 raise error.Abort(
683 _(b'creating obsolete markers is not enabled on this repo')
683 _(b'creating obsolete markers is not enabled on this repo')
684 )
684 )
685 known = set()
685 known = set()
686 getsuccessors = self.successors.get
686 getsuccessors = self.successors.get
687 new = []
687 new = []
688 for m in markers:
688 for m in markers:
689 if m not in getsuccessors(m[0], ()) and m not in known:
689 if m not in getsuccessors(m[0], ()) and m not in known:
690 known.add(m)
690 known.add(m)
691 new.append(m)
691 new.append(m)
692 if new:
692 if new:
693 f = self.svfs(b'obsstore', b'ab')
693 f = self.svfs(b'obsstore', b'ab')
694 try:
694 try:
695 offset = f.tell()
695 offset = f.tell()
696 transaction.add(b'obsstore', offset)
696 transaction.add(b'obsstore', offset)
697 # offset == 0: new file - add the version header
697 # offset == 0: new file - add the version header
698 data = b''.join(encodemarkers(new, offset == 0, self._version))
698 data = b''.join(encodemarkers(new, offset == 0, self._version))
699 f.write(data)
699 f.write(data)
700 finally:
700 finally:
701 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
701 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
702 # call 'filecacheentry.refresh()' here
702 # call 'filecacheentry.refresh()' here
703 f.close()
703 f.close()
704 addedmarkers = transaction.changes.get(b'obsmarkers')
704 addedmarkers = transaction.changes.get(b'obsmarkers')
705 if addedmarkers is not None:
705 if addedmarkers is not None:
706 addedmarkers.update(new)
706 addedmarkers.update(new)
707 self._addmarkers(new, data)
707 self._addmarkers(new, data)
708 # new markers *may* have changed several sets. invalidate the caches.
708 # new markers *may* have changed several sets. invalidate the caches.
709 self.caches.clear()
709 self.caches.clear()
710 # records the number of new markers for the transaction hooks
710 # records the number of new markers for the transaction hooks
711 previous = int(transaction.hookargs.get(b'new_obsmarkers', b'0'))
711 previous = int(transaction.hookargs.get(b'new_obsmarkers', b'0'))
712 transaction.hookargs[b'new_obsmarkers'] = b'%d' % (previous + len(new))
712 transaction.hookargs[b'new_obsmarkers'] = b'%d' % (previous + len(new))
713 return len(new)
713 return len(new)
714
714
715 def mergemarkers(self, transaction, data):
715 def mergemarkers(self, transaction, data):
716 """merge a binary stream of markers inside the obsstore
716 """merge a binary stream of markers inside the obsstore
717
717
718 Returns the number of new markers added."""
718 Returns the number of new markers added."""
719 version, markers = _readmarkers(data)
719 version, markers = _readmarkers(data)
720 return self.add(transaction, markers)
720 return self.add(transaction, markers)
721
721
722 @propertycache
722 @propertycache
723 def _data(self):
723 def _data(self):
724 return self.svfs.tryread(b'obsstore')
724 return self.svfs.tryread(b'obsstore')
725
725
726 @propertycache
726 @propertycache
727 def _version(self):
727 def _version(self):
728 if len(self._data) >= 1:
728 if len(self._data) >= 1:
729 return _readmarkerversion(self._data)
729 return _readmarkerversion(self._data)
730 else:
730 else:
731 return self._defaultformat
731 return self._defaultformat
732
732
733 @propertycache
733 @propertycache
734 def _all(self):
734 def _all(self):
735 data = self._data
735 data = self._data
736 if not data:
736 if not data:
737 return []
737 return []
738 self._version, markers = _readmarkers(data)
738 self._version, markers = _readmarkers(data)
739 markers = list(markers)
739 markers = list(markers)
740 _checkinvalidmarkers(self.repo, markers)
740 _checkinvalidmarkers(self.repo, markers)
741 return markers
741 return markers
742
742
743 @propertycache
743 @propertycache
744 def successors(self):
744 def successors(self):
745 successors = {}
745 successors = {}
746 _addsuccessors(successors, self._all)
746 _addsuccessors(successors, self._all)
747 return successors
747 return successors
748
748
749 @propertycache
749 @propertycache
750 def predecessors(self):
750 def predecessors(self):
751 predecessors = {}
751 predecessors = {}
752 _addpredecessors(predecessors, self._all)
752 _addpredecessors(predecessors, self._all)
753 return predecessors
753 return predecessors
754
754
755 @propertycache
755 @propertycache
756 def children(self):
756 def children(self):
757 children = {}
757 children = {}
758 _addchildren(children, self._all)
758 _addchildren(children, self._all)
759 return children
759 return children
760
760
761 def _cached(self, attr):
761 def _cached(self, attr):
762 return attr in self.__dict__
762 return attr in self.__dict__
763
763
764 def _addmarkers(self, markers, rawdata):
764 def _addmarkers(self, markers, rawdata):
765 markers = list(markers) # to allow repeated iteration
765 markers = list(markers) # to allow repeated iteration
766 self._data = self._data + rawdata
766 self._data = self._data + rawdata
767 self._all.extend(markers)
767 self._all.extend(markers)
768 if self._cached('successors'):
768 if self._cached('successors'):
769 _addsuccessors(self.successors, markers)
769 _addsuccessors(self.successors, markers)
770 if self._cached('predecessors'):
770 if self._cached('predecessors'):
771 _addpredecessors(self.predecessors, markers)
771 _addpredecessors(self.predecessors, markers)
772 if self._cached('children'):
772 if self._cached('children'):
773 _addchildren(self.children, markers)
773 _addchildren(self.children, markers)
774 _checkinvalidmarkers(self.repo, markers)
774 _checkinvalidmarkers(self.repo, markers)
775
775
776 def relevantmarkers(self, nodes):
776 def relevantmarkers(self, nodes=None, revs=None):
777 """return a set of all obsolescence markers relevant to a set of nodes.
777 """return a set of all obsolescence markers relevant to a set of
778 nodes or revisions.
778
779
779 "relevant" to a set of nodes mean:
780 "relevant" to a set of nodes or revisions mean:
780
781
781 - marker that use this changeset as successor
782 - marker that use this changeset as successor
782 - prune marker of direct children on this changeset
783 - prune marker of direct children on this changeset
783 - recursive application of the two rules on predecessors of these
784 - recursive application of the two rules on predecessors of these
784 markers
785 markers
785
786
786 It is a set so you cannot rely on order."""
787 It is a set so you cannot rely on order."""
788 if nodes is None:
789 nodes = set()
790 if revs is None:
791 revs = set()
787
792
788 pendingnodes = set(nodes)
793 tonode = self.repo.unfiltered().changelog.node
789 seenmarkers = set()
794 pendingnodes = set()
790 seennodes = set(pendingnodes)
791 precursorsmarkers = self.predecessors
795 precursorsmarkers = self.predecessors
792 succsmarkers = self.successors
796 succsmarkers = self.successors
793 children = self.children
797 children = self.children
798 for node in nodes:
799 if (
800 node in precursorsmarkers
801 or node in succsmarkers
802 or node in children
803 ):
804 pendingnodes.add(node)
805 for rev in revs:
806 node = tonode(rev)
807 if (
808 node in precursorsmarkers
809 or node in succsmarkers
810 or node in children
811 ):
812 pendingnodes.add(node)
813 seenmarkers = set()
814 seennodes = pendingnodes.copy()
794 while pendingnodes:
815 while pendingnodes:
795 direct = set()
816 direct = set()
796 for current in pendingnodes:
817 for current in pendingnodes:
797 direct.update(precursorsmarkers.get(current, ()))
818 direct.update(precursorsmarkers.get(current, ()))
798 pruned = [m for m in children.get(current, ()) if not m[1]]
819 pruned = [m for m in children.get(current, ()) if not m[1]]
799 direct.update(pruned)
820 direct.update(pruned)
800 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
821 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
801 direct.update(pruned)
822 direct.update(pruned)
802 direct -= seenmarkers
823 direct -= seenmarkers
803 pendingnodes = {m[0] for m in direct}
824 pendingnodes = {m[0] for m in direct}
804 seenmarkers |= direct
825 seenmarkers |= direct
805 pendingnodes -= seennodes
826 pendingnodes -= seennodes
806 seennodes |= pendingnodes
827 seennodes |= pendingnodes
807 return seenmarkers
828 return seenmarkers
808
829
809
830
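A hedged usage sketch of the reworked relevantmarkers() signature; the revset and variable names below are assumptions for illustration, not taken from this patch:

# Assumed usage sketch. Passing revision numbers lets the obsstore filter
# them against its marker indexes up front and only seed the traversal with
# revisions that actually appear in a marker, which is what makes this
# cheaper on large repositories.
draft_revs = repo.revs(b'draft()')
markers = repo.obsstore.relevantmarkers(revs=draft_revs)

# The older calling convention with node ids keeps working:
markers = repo.obsstore.relevantmarkers(nodes=[ctx.node() for ctx in ctxs])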
810 def makestore(ui, repo):
831 def makestore(ui, repo):
811 """Create an obsstore instance from a repo."""
832 """Create an obsstore instance from a repo."""
812 # read default format for new obsstore.
833 # read default format for new obsstore.
813 # developer config: format.obsstore-version
834 # developer config: format.obsstore-version
814 defaultformat = ui.configint(b'format', b'obsstore-version')
835 defaultformat = ui.configint(b'format', b'obsstore-version')
815 # rely on obsstore class default when possible.
836 # rely on obsstore class default when possible.
816 kwargs = {}
837 kwargs = {}
817 if defaultformat is not None:
838 if defaultformat is not None:
818 kwargs['defaultformat'] = defaultformat
839 kwargs['defaultformat'] = defaultformat
819 readonly = not isenabled(repo, createmarkersopt)
840 readonly = not isenabled(repo, createmarkersopt)
820 store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
841 store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
821 if store and readonly:
842 if store and readonly:
822 ui.warn(
843 ui.warn(
823 _(b'"obsolete" feature not enabled but %i markers found!\n')
844 _(b'"obsolete" feature not enabled but %i markers found!\n')
824 % len(list(store))
845 % len(list(store))
825 )
846 )
826 return store
847 return store
827
848
828
849
829 def commonversion(versions):
850 def commonversion(versions):
830 """Return the newest version listed in both versions and our local formats.
851 """Return the newest version listed in both versions and our local formats.
831
852
832 Returns None if no common version exists.
853 Returns None if no common version exists.
833 """
854 """
834 versions.sort(reverse=True)
855 versions.sort(reverse=True)
835 # search for highest version known on both side
856 # search for highest version known on both side
836 for v in versions:
857 for v in versions:
837 if v in formats:
858 if v in formats:
838 return v
859 return v
839 return None
860 return None
840
861
841
862
842 # arbitrarily picked to fit into the 8K limit of the HTTP server
863 # arbitrarily picked to fit into the 8K limit of the HTTP server
843 # you have to take into account:
864 # you have to take into account:
844 # - the version header
865 # - the version header
845 # - the base85 encoding
866 # - the base85 encoding
846 _maxpayload = 5300
867 _maxpayload = 5300
847
868
848
869
849 def _pushkeyescape(markers):
870 def _pushkeyescape(markers):
850 """encode markers into a dict suitable for pushkey exchange
871 """encode markers into a dict suitable for pushkey exchange
851
872
852 - binary data is base85 encoded
873 - binary data is base85 encoded
853 - split in chunks smaller than 5300 bytes"""
874 - split in chunks smaller than 5300 bytes"""
854 keys = {}
875 keys = {}
855 parts = []
876 parts = []
856 currentlen = _maxpayload * 2 # ensure we create a new part
877 currentlen = _maxpayload * 2 # ensure we create a new part
857 for marker in markers:
878 for marker in markers:
858 nextdata = _fm0encodeonemarker(marker)
879 nextdata = _fm0encodeonemarker(marker)
859 if len(nextdata) + currentlen > _maxpayload:
880 if len(nextdata) + currentlen > _maxpayload:
860 currentpart = []
881 currentpart = []
861 currentlen = 0
882 currentlen = 0
862 parts.append(currentpart)
883 parts.append(currentpart)
863 currentpart.append(nextdata)
884 currentpart.append(nextdata)
864 currentlen += len(nextdata)
885 currentlen += len(nextdata)
865 for idx, part in enumerate(reversed(parts)):
886 for idx, part in enumerate(reversed(parts)):
866 data = b''.join([_pack(b'>B', _fm0version)] + part)
887 data = b''.join([_pack(b'>B', _fm0version)] + part)
867 keys[b'dump%i' % idx] = util.b85encode(data)
888 keys[b'dump%i' % idx] = util.b85encode(data)
868 return keys
889 return keys
869
890
870
891
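The greedy chunking in _pushkeyescape can be sketched on its own, with plain byte strings standing in for _fm0-encoded markers and the stdlib base85 standing in for util.b85encode (illustrative only):

# Illustrative sketch of the same greedy chunking.
import base64

_maxpayload = 5300

def chunk(blobs):
    parts, current, currentlen = [], None, _maxpayload * 2
    for blob in blobs:
        if len(blob) + currentlen > _maxpayload:
            current, currentlen = [], 0
            parts.append(current)
        current.append(blob)
        currentlen += len(blob)
    return {
        b'dump%d' % idx: base64.b85encode(b''.join(part))
        for idx, part in enumerate(parts)
    }

keys = chunk([b'x' * 2000, b'y' * 2000, b'z' * 2000])
assert sorted(keys) == [b'dump0', b'dump1']  # first two fit together, third spills over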
871 def listmarkers(repo):
892 def listmarkers(repo):
872 """List markers over pushkey"""
893 """List markers over pushkey"""
873 if not repo.obsstore:
894 if not repo.obsstore:
874 return {}
895 return {}
875 return _pushkeyescape(sorted(repo.obsstore))
896 return _pushkeyescape(sorted(repo.obsstore))
876
897
877
898
878 def pushmarker(repo, key, old, new):
899 def pushmarker(repo, key, old, new):
879 """Push markers over pushkey"""
900 """Push markers over pushkey"""
880 if not key.startswith(b'dump'):
901 if not key.startswith(b'dump'):
881 repo.ui.warn(_(b'unknown key: %r') % key)
902 repo.ui.warn(_(b'unknown key: %r') % key)
882 return False
903 return False
883 if old:
904 if old:
884 repo.ui.warn(_(b'unexpected old value for %r') % key)
905 repo.ui.warn(_(b'unexpected old value for %r') % key)
885 return False
906 return False
886 data = util.b85decode(new)
907 data = util.b85decode(new)
887 with repo.lock(), repo.transaction(b'pushkey: obsolete markers') as tr:
908 with repo.lock(), repo.transaction(b'pushkey: obsolete markers') as tr:
888 repo.obsstore.mergemarkers(tr, data)
909 repo.obsstore.mergemarkers(tr, data)
889 repo.invalidatevolatilesets()
910 repo.invalidatevolatilesets()
890 return True
911 return True
891
912
892
913
893 # mapping of 'set-name' -> <function to compute this set>
914 # mapping of 'set-name' -> <function to compute this set>
894 cachefuncs = {}
915 cachefuncs = {}
895
916
896
917
897 def cachefor(name):
918 def cachefor(name):
898 """Decorator to register a function as computing the cache for a set"""
919 """Decorator to register a function as computing the cache for a set"""
899
920
900 def decorator(func):
921 def decorator(func):
901 if name in cachefuncs:
922 if name in cachefuncs:
902 msg = b"duplicated registration for volatileset '%s' (existing: %r)"
923 msg = b"duplicated registration for volatileset '%s' (existing: %r)"
903 raise error.ProgrammingError(msg % (name, cachefuncs[name]))
924 raise error.ProgrammingError(msg % (name, cachefuncs[name]))
904 cachefuncs[name] = func
925 cachefuncs[name] = func
905 return func
926 return func
906
927
907 return decorator
928 return decorator
908
929
909
930
910 def getrevs(repo, name):
931 def getrevs(repo, name):
911 """Return the set of revision that belong to the <name> set
932 """Return the set of revision that belong to the <name> set
912
933
913 Such access may compute the set and cache it for future use"""
934 Such access may compute the set and cache it for future use"""
914 repo = repo.unfiltered()
935 repo = repo.unfiltered()
915 with util.timedcm('getrevs %s', name):
936 with util.timedcm('getrevs %s', name):
916 if not repo.obsstore:
937 if not repo.obsstore:
917 return frozenset()
938 return frozenset()
918 if name not in repo.obsstore.caches:
939 if name not in repo.obsstore.caches:
919 repo.obsstore.caches[name] = cachefuncs[name](repo)
940 repo.obsstore.caches[name] = cachefuncs[name](repo)
920 return repo.obsstore.caches[name]
941 return repo.obsstore.caches[name]
921
942
922
943
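A hedged sketch of the registration pattern cachefor() and getrevs() set up; b'example' is a made-up set name, not one Mercurial defines:

# Assumed usage sketch: register a computation for a new volatile set and
# let getrevs() compute it lazily and memoize it on repo.obsstore.caches.
@cachefor(b'example')
def _computeexampleset(repo):
    # any frozenset of revision numbers derived from the repo/obsstore
    return frozenset(r for r in repo.revs(b'draft()') if r % 2 == 0)

revs = getrevs(repo, b'example')        # computed on first access
revs_again = getrevs(repo, b'example')  # served from repo.obsstore.caches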
923 # To be simple we need to invalidate obsolescence cache when:
944 # To be simple we need to invalidate obsolescence cache when:
924 #
945 #
925 # - new changeset is added:
946 # - new changeset is added:
926 # - public phase is changed
947 # - public phase is changed
927 # - obsolescence marker are added
948 # - obsolescence marker are added
928 # - strip is used a repo
949 # - strip is used a repo
929 def clearobscaches(repo):
950 def clearobscaches(repo):
930 """Remove all obsolescence related cache from a repo
951 """Remove all obsolescence related cache from a repo
931
952
932 This removes all caches in the obsstore if the obsstore already exists on
953 This removes all caches in the obsstore if the obsstore already exists on
933 the repo.
954 the repo.
934
955
935 (We could be smarter here given the exact event that trigger the cache
956 (We could be smarter here given the exact event that trigger the cache
936 clearing)"""
957 clearing)"""
937 # only clear caches if there is obsstore data in this repo
958 # only clear caches if there is obsstore data in this repo
938 if b'obsstore' in repo._filecache:
959 if b'obsstore' in repo._filecache:
939 repo.obsstore.caches.clear()
960 repo.obsstore.caches.clear()
940
961
941
962
942 def _mutablerevs(repo):
963 def _mutablerevs(repo):
943 """the set of mutable revision in the repository"""
964 """the set of mutable revision in the repository"""
944 return repo._phasecache.getrevset(repo, phases.relevant_mutable_phases)
965 return repo._phasecache.getrevset(repo, phases.relevant_mutable_phases)
945
966
946
967
947 @cachefor(b'obsolete')
968 @cachefor(b'obsolete')
948 def _computeobsoleteset(repo):
969 def _computeobsoleteset(repo):
949 """the set of obsolete revisions"""
970 """the set of obsolete revisions"""
950 getnode = repo.changelog.node
971 getnode = repo.changelog.node
951 notpublic = _mutablerevs(repo)
972 notpublic = _mutablerevs(repo)
952 isobs = repo.obsstore.successors.__contains__
973 isobs = repo.obsstore.successors.__contains__
953 return frozenset(r for r in notpublic if isobs(getnode(r)))
974 return frozenset(r for r in notpublic if isobs(getnode(r)))
954
975
955
976
956 @cachefor(b'orphan')
977 @cachefor(b'orphan')
957 def _computeorphanset(repo):
978 def _computeorphanset(repo):
958 """the set of non obsolete revisions with obsolete parents"""
979 """the set of non obsolete revisions with obsolete parents"""
959 pfunc = repo.changelog.parentrevs
980 pfunc = repo.changelog.parentrevs
960 mutable = _mutablerevs(repo)
981 mutable = _mutablerevs(repo)
961 obsolete = getrevs(repo, b'obsolete')
982 obsolete = getrevs(repo, b'obsolete')
962 others = mutable - obsolete
983 others = mutable - obsolete
963 unstable = set()
984 unstable = set()
964 for r in sorted(others):
985 for r in sorted(others):
965 # A rev is unstable if one of its parents is obsolete or unstable
986 # A rev is unstable if one of its parents is obsolete or unstable
966 # this works since we traverse following growing rev order
987 # this works since we traverse following growing rev order
967 for p in pfunc(r):
988 for p in pfunc(r):
968 if p in obsolete or p in unstable:
989 if p in obsolete or p in unstable:
969 unstable.add(r)
990 unstable.add(r)
970 break
991 break
971 return frozenset(unstable)
992 return frozenset(unstable)
972
993
973
994
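The growing-rev-order invariant used by _computeorphanset can be shown on a toy graph where every parent revision is smaller than its children (illustrative only):

# Illustrative sketch: a single pass in increasing rev order suffices because
# any unstable ancestor has already been classified by the time a descendant
# is visited. (The real code also excludes obsolete revs from the candidates.)
parents = {1: (0,), 2: (1,), 3: (2,), 4: (0,)}  # toy DAG: rev -> parent revs
obsolete = {1}
unstable = set()
for r in sorted(parents):
    if any(p in obsolete or p in unstable for p in parents[r]):
        unstable.add(r)
assert unstable == {2, 3}  # descendants of the obsolete rev; 4 is unaffected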
974 @cachefor(b'suspended')
995 @cachefor(b'suspended')
975 def _computesuspendedset(repo):
996 def _computesuspendedset(repo):
976 """the set of obsolete parents with non obsolete descendants"""
997 """the set of obsolete parents with non obsolete descendants"""
977 suspended = repo.changelog.ancestors(getrevs(repo, b'orphan'))
998 suspended = repo.changelog.ancestors(getrevs(repo, b'orphan'))
978 return frozenset(r for r in getrevs(repo, b'obsolete') if r in suspended)
999 return frozenset(r for r in getrevs(repo, b'obsolete') if r in suspended)
979
1000
980
1001
981 @cachefor(b'extinct')
1002 @cachefor(b'extinct')
982 def _computeextinctset(repo):
1003 def _computeextinctset(repo):
983 """the set of obsolete parents without non obsolete descendants"""
1004 """the set of obsolete parents without non obsolete descendants"""
984 return getrevs(repo, b'obsolete') - getrevs(repo, b'suspended')
1005 return getrevs(repo, b'obsolete') - getrevs(repo, b'suspended')
985
1006
986
1007
987 @cachefor(b'phasedivergent')
1008 @cachefor(b'phasedivergent')
988 def _computephasedivergentset(repo):
1009 def _computephasedivergentset(repo):
989 """the set of revs trying to obsolete public revisions"""
1010 """the set of revs trying to obsolete public revisions"""
990 bumped = set()
1011 bumped = set()
991 # util function (avoid attribute lookup in the loop)
1012 # util function (avoid attribute lookup in the loop)
992 phase = repo._phasecache.phase # would be faster to grab the full list
1013 phase = repo._phasecache.phase # would be faster to grab the full list
993 public = phases.public
1014 public = phases.public
994 cl = repo.changelog
1015 cl = repo.changelog
995 torev = cl.index.get_rev
1016 torev = cl.index.get_rev
996 tonode = cl.node
1017 tonode = cl.node
997 obsstore = repo.obsstore
1018 obsstore = repo.obsstore
998 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1019 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
999 for rev in candidates:
1020 for rev in candidates:
1000 # We only evaluate mutable, non-obsolete revision
1021 # We only evaluate mutable, non-obsolete revision
1001 node = tonode(rev)
1022 node = tonode(rev)
1002 # (future) A cache of predecessors may worth if split is very common
1023 # (future) A cache of predecessors may worth if split is very common
1003 for pnode in obsutil.allpredecessors(
1024 for pnode in obsutil.allpredecessors(
1004 obsstore, [node], ignoreflags=bumpedfix
1025 obsstore, [node], ignoreflags=bumpedfix
1005 ):
1026 ):
1006 prev = torev(pnode) # unfiltered! but so is phasecache
1027 prev = torev(pnode) # unfiltered! but so is phasecache
1007 if (prev is not None) and (phase(repo, prev) <= public):
1028 if (prev is not None) and (phase(repo, prev) <= public):
1008 # we have a public predecessor
1029 # we have a public predecessor
1009 bumped.add(rev)
1030 bumped.add(rev)
1010 break # Next draft!
1031 break # Next draft!
1011 return frozenset(bumped)
1032 return frozenset(bumped)
1012
1033
1013
1034
1014 @cachefor(b'contentdivergent')
1035 @cachefor(b'contentdivergent')
1015 def _computecontentdivergentset(repo):
1036 def _computecontentdivergentset(repo):
1016 """the set of rev that compete to be the final successors of some revision."""
1037 """the set of rev that compete to be the final successors of some revision."""
1017 divergent = set()
1038 divergent = set()
1018 obsstore = repo.obsstore
1039 obsstore = repo.obsstore
1019 newermap = {}
1040 newermap = {}
1020 tonode = repo.changelog.node
1041 tonode = repo.changelog.node
1021 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1042 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1022 for rev in candidates:
1043 for rev in candidates:
1023 node = tonode(rev)
1044 node = tonode(rev)
1024 mark = obsstore.predecessors.get(node, ())
1045 mark = obsstore.predecessors.get(node, ())
1025 toprocess = set(mark)
1046 toprocess = set(mark)
1026 seen = set()
1047 seen = set()
1027 while toprocess:
1048 while toprocess:
1028 prec = toprocess.pop()[0]
1049 prec = toprocess.pop()[0]
1029 if prec in seen:
1050 if prec in seen:
1030 continue # emergency cycle hanging prevention
1051 continue # emergency cycle hanging prevention
1031 seen.add(prec)
1052 seen.add(prec)
1032 if prec not in newermap:
1053 if prec not in newermap:
1033 obsutil.successorssets(repo, prec, cache=newermap)
1054 obsutil.successorssets(repo, prec, cache=newermap)
1034 newer = [n for n in newermap[prec] if n]
1055 newer = [n for n in newermap[prec] if n]
1035 if len(newer) > 1:
1056 if len(newer) > 1:
1036 divergent.add(rev)
1057 divergent.add(rev)
1037 break
1058 break
1038 toprocess.update(obsstore.predecessors.get(prec, ()))
1059 toprocess.update(obsstore.predecessors.get(prec, ()))
1039 return frozenset(divergent)
1060 return frozenset(divergent)
1040
1061
1041
1062
1042 def makefoldid(relation, user):
1063 def makefoldid(relation, user):
1043 folddigest = hashutil.sha1(user)
1064 folddigest = hashutil.sha1(user)
1044 for p in relation[0] + relation[1]:
1065 for p in relation[0] + relation[1]:
1045 folddigest.update(b'%d' % p.rev())
1066 folddigest.update(b'%d' % p.rev())
1046 folddigest.update(p.node())
1067 folddigest.update(p.node())
1047 # Since fold only has to compete against fold for the same successors, it
1068 # Since fold only has to compete against fold for the same successors, it
1048 # seems fine to use a small ID. Smaller ID save space.
1069 # seems fine to use a small ID. Smaller ID save space.
1049 return hex(folddigest.digest())[:8]
1070 return hex(folddigest.digest())[:8]
1050
1071
1051
1072
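A standalone sketch of the fold-id construction above, using hashlib.sha1 in place of hashutil.sha1 and (rev, node) pairs standing in for changectx objects (illustrative only):

# Illustrative sketch: the fold id only needs to distinguish folds that
# target the same successors, so 8 hex digits of a sha1 are enough.
import hashlib

def make_fold_id(precs, succs, user):
    digest = hashlib.sha1(user)
    for rev, node in precs + succs:
        digest.update(b'%d' % rev)
        digest.update(node)
    return digest.hexdigest()[:8].encode('ascii')

fold_id = make_fold_id(
    precs=[(5, b'a' * 20), (6, b'b' * 20)],
    succs=[(7, b'c' * 20)],
    user=b'alice',
)
assert len(fold_id) == 8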
1052 def createmarkers(
1073 def createmarkers(
1053 repo, relations, flag=0, date=None, metadata=None, operation=None
1074 repo, relations, flag=0, date=None, metadata=None, operation=None
1054 ):
1075 ):
1055 """Add obsolete markers between changesets in a repo
1076 """Add obsolete markers between changesets in a repo
1056
1077
1057 <relations> must be an iterable of ((<old>,...), (<new>, ...)[,{metadata}])
1078 <relations> must be an iterable of ((<old>,...), (<new>, ...)[,{metadata}])
1058 tuple. `old` and `news` are changectx. metadata is an optional dictionary
1079 tuple. `old` and `news` are changectx. metadata is an optional dictionary
1059 containing metadata for this marker only. It is merged with the global
1080 containing metadata for this marker only. It is merged with the global
1060 metadata specified through the `metadata` argument of this function.
1081 metadata specified through the `metadata` argument of this function.
1061 Any string values in metadata must be UTF-8 bytes.
1082 Any string values in metadata must be UTF-8 bytes.
1062
1083
1063 Trying to obsolete a public changeset will raise an exception.
1084 Trying to obsolete a public changeset will raise an exception.
1064
1085
1065 Current user and date are used except if specified otherwise in the
1086 Current user and date are used except if specified otherwise in the
1066 metadata attribute.
1087 metadata attribute.
1067
1088
1068 This function operates within a transaction of its own, but does
1089 This function operates within a transaction of its own, but does
1069 not take any lock on the repo.
1090 not take any lock on the repo.
1070 """
1091 """
1071 # prepare metadata
1092 # prepare metadata
1072 if metadata is None:
1093 if metadata is None:
1073 metadata = {}
1094 metadata = {}
1074 if b'user' not in metadata:
1095 if b'user' not in metadata:
1075 luser = (
1096 luser = (
1076 repo.ui.config(b'devel', b'user.obsmarker') or repo.ui.username()
1097 repo.ui.config(b'devel', b'user.obsmarker') or repo.ui.username()
1077 )
1098 )
1078 metadata[b'user'] = encoding.fromlocal(luser)
1099 metadata[b'user'] = encoding.fromlocal(luser)
1079
1100
1080 # Operation metadata handling
1101 # Operation metadata handling
1081 useoperation = repo.ui.configbool(
1102 useoperation = repo.ui.configbool(
1082 b'experimental', b'evolution.track-operation'
1103 b'experimental', b'evolution.track-operation'
1083 )
1104 )
1084 if useoperation and operation:
1105 if useoperation and operation:
1085 metadata[b'operation'] = operation
1106 metadata[b'operation'] = operation
1086
1107
1087 # Effect flag metadata handling
1108 # Effect flag metadata handling
1088 saveeffectflag = repo.ui.configbool(
1109 saveeffectflag = repo.ui.configbool(
1089 b'experimental', b'evolution.effect-flags'
1110 b'experimental', b'evolution.effect-flags'
1090 )
1111 )
1091
1112
1092 with repo.transaction(b'add-obsolescence-marker') as tr:
1113 with repo.transaction(b'add-obsolescence-marker') as tr:
1093 markerargs = []
1114 markerargs = []
1094 for rel in relations:
1115 for rel in relations:
1095 predecessors = rel[0]
1116 predecessors = rel[0]
1096 if not isinstance(predecessors, tuple):
1117 if not isinstance(predecessors, tuple):
1097 # preserve compat with old API until all caller are migrated
1118 # preserve compat with old API until all caller are migrated
1098 predecessors = (predecessors,)
1119 predecessors = (predecessors,)
1099 if len(predecessors) > 1 and len(rel[1]) != 1:
1120 if len(predecessors) > 1 and len(rel[1]) != 1:
1100 msg = b'Fold markers can only have 1 successors, not %d'
1121 msg = b'Fold markers can only have 1 successors, not %d'
1101 raise error.ProgrammingError(msg % len(rel[1]))
1122 raise error.ProgrammingError(msg % len(rel[1]))
1102 foldid = None
1123 foldid = None
1103 foldsize = len(predecessors)
1124 foldsize = len(predecessors)
1104 if 1 < foldsize:
1125 if 1 < foldsize:
1105 foldid = makefoldid(rel, metadata[b'user'])
1126 foldid = makefoldid(rel, metadata[b'user'])
1106 for foldidx, prec in enumerate(predecessors, 1):
1127 for foldidx, prec in enumerate(predecessors, 1):
1107 sucs = rel[1]
1128 sucs = rel[1]
1108 localmetadata = metadata.copy()
1129 localmetadata = metadata.copy()
1109 if len(rel) > 2:
1130 if len(rel) > 2:
1110 localmetadata.update(rel[2])
1131 localmetadata.update(rel[2])
1111 if foldid is not None:
1132 if foldid is not None:
1112 localmetadata[b'fold-id'] = foldid
1133 localmetadata[b'fold-id'] = foldid
1113 localmetadata[b'fold-idx'] = b'%d' % foldidx
1134 localmetadata[b'fold-idx'] = b'%d' % foldidx
1114 localmetadata[b'fold-size'] = b'%d' % foldsize
1135 localmetadata[b'fold-size'] = b'%d' % foldsize
1115
1136
1116 if not prec.mutable():
1137 if not prec.mutable():
1117 raise error.Abort(
1138 raise error.Abort(
1118 _(b"cannot obsolete public changeset: %s") % prec,
1139 _(b"cannot obsolete public changeset: %s") % prec,
1119 hint=b"see 'hg help phases' for details",
1140 hint=b"see 'hg help phases' for details",
1120 )
1141 )
1121 nprec = prec.node()
1142 nprec = prec.node()
1122 nsucs = tuple(s.node() for s in sucs)
1143 nsucs = tuple(s.node() for s in sucs)
1123 npare = None
1144 npare = None
1124 if not nsucs:
1145 if not nsucs:
1125 npare = tuple(p.node() for p in prec.parents())
1146 npare = tuple(p.node() for p in prec.parents())
1126 if nprec in nsucs:
1147 if nprec in nsucs:
1127 raise error.Abort(
1148 raise error.Abort(
1128 _(b"changeset %s cannot obsolete itself") % prec
1149 _(b"changeset %s cannot obsolete itself") % prec
1129 )
1150 )
1130
1151
1131 # Effect flag can be different by relation
1152 # Effect flag can be different by relation
1132 if saveeffectflag:
1153 if saveeffectflag:
1133 # The effect flag is saved in a versioned field name for
1154 # The effect flag is saved in a versioned field name for
1134 # future evolution
1155 # future evolution
1135 effectflag = obsutil.geteffectflag(prec, sucs)
1156 effectflag = obsutil.geteffectflag(prec, sucs)
1136 localmetadata[obsutil.EFFECTFLAGFIELD] = b"%d" % effectflag
1157 localmetadata[obsutil.EFFECTFLAGFIELD] = b"%d" % effectflag
1137
1158
1138 # Creating the marker causes the hidden cache to become
1159 # Creating the marker causes the hidden cache to become
1139 # invalid, which causes recomputation when we ask for
1160 # invalid, which causes recomputation when we ask for
1140 # prec.parents() above. Resulting in n^2 behavior. So let's
1161 # prec.parents() above. Resulting in n^2 behavior. So let's
1141 # prepare all of the args first, then create the markers.
1162 # prepare all of the args first, then create the markers.
1142 markerargs.append((nprec, nsucs, npare, localmetadata))
1163 markerargs.append((nprec, nsucs, npare, localmetadata))
1143
1164
1144 for args in markerargs:
1165 for args in markerargs:
1145 nprec, nsucs, npare, localmetadata = args
1166 nprec, nsucs, npare, localmetadata = args
1146 repo.obsstore.create(
1167 repo.obsstore.create(
1147 tr,
1168 tr,
1148 nprec,
1169 nprec,
1149 nsucs,
1170 nsucs,
1150 flag,
1171 flag,
1151 parents=npare,
1172 parents=npare,
1152 date=date,
1173 date=date,
1153 metadata=localmetadata,
1174 metadata=localmetadata,
1154 ui=repo.ui,
1175 ui=repo.ui,
1155 )
1176 )
1156 repo.filteredrevcache.clear()
1177 repo.filteredrevcache.clear()
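A hedged usage sketch for createmarkers(); old_ctx and new_ctx are assumed changectx objects, and the operation and metadata values are examples only:

# Assumed usage sketch: record that old_ctx was rewritten into new_ctx.
# createmarkers() opens its own transaction but takes no lock, so the
# caller is expected to hold one.
with repo.lock():
    createmarkers(
        repo,
        [((old_ctx,), (new_ctx,))],  # one relation: predecessors, successors
        operation=b'amend',
        metadata={b'note': b'example rewrite'},
    )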
@@ -1,1049 +1,1049
1 # obsutil.py - utility functions for obsolescence
1 # obsutil.py - utility functions for obsolescence
2 #
2 #
3 # Copyright 2017 Boris Feld <boris.feld@octobus.net>
3 # Copyright 2017 Boris Feld <boris.feld@octobus.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import annotations
8 from __future__ import annotations
9
9
10 import re
10 import re
11
11
12 from .i18n import _
12 from .i18n import _
13 from .node import (
13 from .node import (
14 hex,
14 hex,
15 short,
15 short,
16 )
16 )
17 from . import (
17 from . import (
18 diffutil,
18 diffutil,
19 encoding,
19 encoding,
20 error,
20 error,
21 phases,
21 phases,
22 util,
22 util,
23 )
23 )
24 from .utils import dateutil
24 from .utils import dateutil
25
25
26 ### obsolescence marker flag
26 ### obsolescence marker flag
27
27
28 ## bumpedfix flag
28 ## bumpedfix flag
29 #
29 #
30 # When a changeset A' succeeds a changeset A which became public, we call A'
30 # When a changeset A' succeeds a changeset A which became public, we call A'
31 # "bumped" because it's a successor of a public changeset
31 # "bumped" because it's a successor of a public changeset
32 #
32 #
33 # o A' (bumped)
33 # o A' (bumped)
34 # |`:
34 # |`:
35 # | o A
35 # | o A
36 # |/
36 # |/
37 # o Z
37 # o Z
38 #
38 #
39 # The way to solve this situation is to create a new changeset Ad as a child
39 # The way to solve this situation is to create a new changeset Ad as a child
40 # of A. This changeset has the same content as A'. So the diff from A to A'
40 # of A. This changeset has the same content as A'. So the diff from A to A'
41 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
41 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
42 #
42 #
43 # o Ad
43 # o Ad
44 # |`:
44 # |`:
45 # | x A'
45 # | x A'
46 # |'|
46 # |'|
47 # o | A
47 # o | A
48 # |/
48 # |/
49 # o Z
49 # o Z
50 #
50 #
51 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
51 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
52 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
52 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
53 # This flag means that the successors express the changes between the public and
53 # This flag means that the successors express the changes between the public and
54 # bumped version and fix the situation, breaking the transitivity of
54 # bumped version and fix the situation, breaking the transitivity of
55 # "bumped" here.
55 # "bumped" here.
56 bumpedfix = 1
56 bumpedfix = 1
57 usingsha256 = 2
57 usingsha256 = 2
58
58
59
59
60 class marker:
60 class marker:
61 """Wrap obsolete marker raw data"""
61 """Wrap obsolete marker raw data"""
62
62
63 def __init__(self, repo, data):
63 def __init__(self, repo, data):
64 # the repo argument will be used to create changectx in later version
64 # the repo argument will be used to create changectx in later version
65 self._repo = repo
65 self._repo = repo
66 self._data = data
66 self._data = data
67 self._decodedmeta = None
67 self._decodedmeta = None
68
68
69 def __hash__(self):
69 def __hash__(self):
70 return hash(self._data)
70 return hash(self._data)
71
71
72 def __eq__(self, other):
72 def __eq__(self, other):
73 if type(other) != type(self):
73 if type(other) != type(self):
74 return False
74 return False
75 return self._data == other._data
75 return self._data == other._data
76
76
77 def prednode(self):
77 def prednode(self):
78 """Predecessor changeset node identifier"""
78 """Predecessor changeset node identifier"""
79 return self._data[0]
79 return self._data[0]
80
80
81 def succnodes(self):
81 def succnodes(self):
82 """List of successor changesets node identifiers"""
82 """List of successor changesets node identifiers"""
83 return self._data[1]
83 return self._data[1]
84
84
85 def parentnodes(self):
85 def parentnodes(self):
86 """Parents of the predecessors (None if not recorded)"""
86 """Parents of the predecessors (None if not recorded)"""
87 return self._data[5]
87 return self._data[5]
88
88
89 def metadata(self):
89 def metadata(self):
90 """Decoded metadata dictionary"""
90 """Decoded metadata dictionary"""
91 return dict(self._data[3])
91 return dict(self._data[3])
92
92
93 def date(self):
93 def date(self):
94 """Creation date as (unixtime, offset)"""
94 """Creation date as (unixtime, offset)"""
95 return self._data[4]
95 return self._data[4]
96
96
97 def flags(self):
97 def flags(self):
98 """The flags field of the marker"""
98 """The flags field of the marker"""
99 return self._data[2]
99 return self._data[2]
100
100
101
101
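A sketch of reading a raw marker tuple through the marker wrapper above; passing None for the repo is an assumption that works for this read-only use, since the wrapper only stores it:

# Illustrative sketch: wrap a raw tuple and read it back through the API.
raw = (
    b'p' * 20,                                   # predecessor node
    (b's' * 20,),                                # successor nodes
    0,                                           # flags
    ((b'user', b'alice <alice@example.org>'),),  # encoded metadata items
    (0.0, 0),                                    # (unixtime, tz offset)
    None,                                        # parents of the predecessor, unrecorded
)
m = marker(None, raw)  # repo=None is an assumption for this sketch
assert m.prednode() == b'p' * 20
assert m.metadata()[b'user'].startswith(b'alice')
assert m.flags() == 0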
102 def getmarkers(repo, nodes=None, exclusive=False):
102 def getmarkers(repo, nodes=None, exclusive=False):
103 """returns markers known in a repository
103 """returns markers known in a repository
104
104
105 If <nodes> is specified, only markers "relevant" to those nodes are
105 If <nodes> is specified, only markers "relevant" to those nodes are
106 returned"""
106 returned"""
107 if nodes is None:
107 if nodes is None:
108 rawmarkers = repo.obsstore
108 rawmarkers = repo.obsstore
109 elif exclusive:
109 elif exclusive:
110 rawmarkers = exclusivemarkers(repo, nodes)
110 rawmarkers = exclusivemarkers(repo, nodes)
111 else:
111 else:
112 rawmarkers = repo.obsstore.relevantmarkers(nodes)
112 rawmarkers = repo.obsstore.relevantmarkers(nodes=nodes)
113
113
114 for markerdata in rawmarkers:
114 for markerdata in rawmarkers:
115 yield marker(repo, markerdata)
115 yield marker(repo, markerdata)
116
116
117
117
118 def sortedmarkers(markers):
118 def sortedmarkers(markers):
119 # last item of marker tuple ('parents') may be None or a tuple
119 # last item of marker tuple ('parents') may be None or a tuple
120 return sorted(markers, key=lambda m: m[:-1] + (m[-1] or (),))
120 return sorted(markers, key=lambda m: m[:-1] + (m[-1] or (),))
121
121
122
122
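The sort key above exists because the parents field may be None and Python 3 cannot order None against a tuple; a minimal sketch:

# Illustrative sketch of why the key maps a None parents field to ().
m1 = (b'a', (b'b',), 0, (), (0.0, 0), None)     # parents unrecorded
m2 = (b'a', (b'b',), 0, (), (0.0, 0), (b'z',))  # parents recorded

# sorted([m1, m2]) would raise TypeError once the tuple comparison reaches
# the last field (None vs tuple).
ordered = sortedmarkers([m2, m1])
assert ordered[0] is m1  # () sorts before (b'z',)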
123 def closestpredecessors(repo, nodeid):
123 def closestpredecessors(repo, nodeid):
124 """yield the list of next predecessors pointing on visible changectx nodes
124 """yield the list of next predecessors pointing on visible changectx nodes
125
125
126 This function respects repoview filtering; filtered revisions will be
126 This function respects repoview filtering; filtered revisions will be
127 considered missing.
127 considered missing.
128 """
128 """
129
129
130 precursors = repo.obsstore.predecessors
130 precursors = repo.obsstore.predecessors
131 stack = [nodeid]
131 stack = [nodeid]
132 seen = set(stack)
132 seen = set(stack)
133
133
134 while stack:
134 while stack:
135 current = stack.pop()
135 current = stack.pop()
136 currentpreccs = precursors.get(current, ())
136 currentpreccs = precursors.get(current, ())
137
137
138 for prec in currentpreccs:
138 for prec in currentpreccs:
139 precnodeid = prec[0]
139 precnodeid = prec[0]
140
140
141 # Basic cycle protection
141 # Basic cycle protection
142 if precnodeid in seen:
142 if precnodeid in seen:
143 continue
143 continue
144 seen.add(precnodeid)
144 seen.add(precnodeid)
145
145
146 if precnodeid in repo:
146 if precnodeid in repo:
147 yield precnodeid
147 yield precnodeid
148 else:
148 else:
149 stack.append(precnodeid)
149 stack.append(precnodeid)
150
150
151
151
152 def allpredecessors(obsstore, nodes, ignoreflags=0):
152 def allpredecessors(obsstore, nodes, ignoreflags=0):
153 """Yield node for every precursors of <nodes>.
153 """Yield node for every precursors of <nodes>.
154
154
155 Some precursors may be unknown locally.
155 Some precursors may be unknown locally.
156
156
157 This is a linear yield unsuited to detecting folded changesets. It includes
157 This is a linear yield unsuited to detecting folded changesets. It includes
158 initial nodes too."""
158 initial nodes too."""
159
159
160 remaining = set(nodes)
160 remaining = set(nodes)
161 seen = set(remaining)
161 seen = set(remaining)
162 prec = obsstore.predecessors.get
162 prec = obsstore.predecessors.get
163 while remaining:
163 while remaining:
164 current = remaining.pop()
164 current = remaining.pop()
165 yield current
165 yield current
166 for mark in prec(current, ()):
166 for mark in prec(current, ()):
167 # ignore marker flagged with specified flag
167 # ignore marker flagged with specified flag
168 if mark[2] & ignoreflags:
168 if mark[2] & ignoreflags:
169 continue
169 continue
170 suc = mark[0]
170 suc = mark[0]
171 if suc not in seen:
171 if suc not in seen:
172 seen.add(suc)
172 seen.add(suc)
173 remaining.add(suc)
173 remaining.add(suc)
174
174
175
175
176 def allsuccessors(obsstore, nodes, ignoreflags=0):
176 def allsuccessors(obsstore, nodes, ignoreflags=0):
177 """Yield node for every successor of <nodes>.
177 """Yield node for every successor of <nodes>.
178
178
179 Some successors may be unknown locally.
179 Some successors may be unknown locally.
180
180
181 This is a linear yield unsuited to detecting split changesets. It includes
181 This is a linear yield unsuited to detecting split changesets. It includes
182 initial nodes too."""
182 initial nodes too."""
183 remaining = set(nodes)
183 remaining = set(nodes)
184 seen = set(remaining)
184 seen = set(remaining)
185 while remaining:
185 while remaining:
186 current = remaining.pop()
186 current = remaining.pop()
187 yield current
187 yield current
188 for mark in obsstore.successors.get(current, ()):
188 for mark in obsstore.successors.get(current, ()):
189 # ignore marker flagged with specified flag
189 # ignore marker flagged with specified flag
190 if mark[2] & ignoreflags:
190 if mark[2] & ignoreflags:
191 continue
191 continue
192 for suc in mark[1]:
192 for suc in mark[1]:
193 if suc not in seen:
193 if suc not in seen:
194 seen.add(suc)
194 seen.add(suc)
195 remaining.add(suc)
195 remaining.add(suc)
196
196
197
197
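A toy sketch of the walk allsuccessors() performs, with a small stand-in object providing the successors index (illustrative only):

# Illustrative sketch: a was rewritten as b, and b was split into c and d.
# The walk yields the whole closure but, being linear, cannot tell a split
# apart from two independent rewrites.
class ToyStore:
    successors = {
        b'a': [(b'a', (b'b',), 0, (), (0.0, 0), None)],
        b'b': [(b'b', (b'c', b'd'), 0, (), (0.0, 0), None)],
    }

seen = set(allsuccessors(ToyStore(), [b'a']))
assert seen == {b'a', b'b', b'c', b'd'}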
198 def _filterprunes(markers):
198 def _filterprunes(markers):
199 """return a set with no prune markers"""
199 """return a set with no prune markers"""
200 return {m for m in markers if m[1]}
200 return {m for m in markers if m[1]}
201
201
202
202
203 def exclusivemarkers(repo, nodes):
203 def exclusivemarkers(repo, nodes):
204 """set of markers relevant to "nodes" but no other locally-known nodes
204 """set of markers relevant to "nodes" but no other locally-known nodes
205
205
206 This function computes the set of markers "exclusive" to a locally-known
206 This function computes the set of markers "exclusive" to a locally-known
207 node. This means we walk the markers starting from <nodes> until we reach a
207 node. This means we walk the markers starting from <nodes> until we reach a
208 locally-known precursor outside of <nodes>. Elements of <nodes> with
208 locally-known precursor outside of <nodes>. Elements of <nodes> with
209 locally-known successors outside of <nodes> are ignored (since their
209 locally-known successors outside of <nodes> are ignored (since their
210 precursors markers are also relevant to these successors).
210 precursors markers are also relevant to these successors).
211
211
212 For example:
212 For example:
213
213
214 # (A0 rewritten as A1)
214 # (A0 rewritten as A1)
215 #
215 #
216 # A0 <-1- A1 # Marker "1" is exclusive to A1
216 # A0 <-1- A1 # Marker "1" is exclusive to A1
217
217
218 or
218 or
219
219
220 # (A0 rewritten as AX; AX rewritten as A1; AX is unknown locally)
220 # (A0 rewritten as AX; AX rewritten as A1; AX is unknown locally)
221 #
221 #
222 # <-1- A0 <-2- AX <-3- A1 # Marker "2,3" are exclusive to A1
222 # <-1- A0 <-2- AX <-3- A1 # Marker "2,3" are exclusive to A1
223
223
224 or
224 or
225
225
226 # (A0 has unknown precursors, A0 rewritten as A1 and A2 (divergence))
226 # (A0 has unknown precursors, A0 rewritten as A1 and A2 (divergence))
227 #
227 #
228 # <-2- A1 # Marker "2" is exclusive to A0,A1
228 # <-2- A1 # Marker "2" is exclusive to A0,A1
229 # /
229 # /
230 # <-1- A0
230 # <-1- A0
231 # \
231 # \
232 # <-3- A2 # Marker "3" is exclusive to A0,A2
232 # <-3- A2 # Marker "3" is exclusive to A0,A2
233 #
233 #
234 # in addition:
234 # in addition:
235 #
235 #
236 # Markers "2,3" are exclusive to A1,A2
236 # Markers "2,3" are exclusive to A1,A2
237 # Markers "1,2,3" are exclusive to A0,A1,A2
237 # Markers "1,2,3" are exclusive to A0,A1,A2
238
238
239 See test/test-obsolete-bundle-strip.t for more examples.
239 See test/test-obsolete-bundle-strip.t for more examples.
240
240
241 An example usage is strip. When stripping a changeset, we also want to
241 An example usage is strip. When stripping a changeset, we also want to
242 strip the markers exclusive to this changeset. Otherwise we would have
242 strip the markers exclusive to this changeset. Otherwise we would have
243 "dangling"" obsolescence markers from its precursors: Obsolescence markers
243 "dangling"" obsolescence markers from its precursors: Obsolescence markers
244 marking a node as obsolete without any successors available locally.
244 marking a node as obsolete without any successors available locally.
245
245
246 As for relevant markers, the prune markers for children will be followed.
246 As for relevant markers, the prune markers for children will be followed.
247 Of course, they will only be followed if the pruned child is
247 Of course, they will only be followed if the pruned child is
248 locally-known, since the prune markers are relevant to the pruned node.
248 locally-known, since the prune markers are relevant to the pruned node.
249 However, while prune markers are considered relevant to the parent of the
249 However, while prune markers are considered relevant to the parent of the
250 pruned changesets, prune markers for locally-known changeset (with no
250 pruned changesets, prune markers for locally-known changeset (with no
251 successors) are considered exclusive to the pruned nodes. This allows
251 successors) are considered exclusive to the pruned nodes. This allows
252 to strip the prune markers (with the rest of the exclusive chain) alongside
252 to strip the prune markers (with the rest of the exclusive chain) alongside
253 the pruned changesets.
253 the pruned changesets.
254 """
254 """
255 # running on a filtered repository would be dangerous as markers could be
255 # running on a filtered repository would be dangerous as markers could be
256 # reported as exclusive when they are relevant for other filtered nodes.
256 # reported as exclusive when they are relevant for other filtered nodes.
257 unfi = repo.unfiltered()
257 unfi = repo.unfiltered()
258
258
259 # shortcut to various useful item
259 # shortcut to various useful item
260 has_node = unfi.changelog.index.has_node
260 has_node = unfi.changelog.index.has_node
261 precursorsmarkers = unfi.obsstore.predecessors
261 precursorsmarkers = unfi.obsstore.predecessors
262 successormarkers = unfi.obsstore.successors
262 successormarkers = unfi.obsstore.successors
263 childrenmarkers = unfi.obsstore.children
263 childrenmarkers = unfi.obsstore.children
264
264
265 # exclusive markers (return of the function)
265 # exclusive markers (return of the function)
266 exclmarkers = set()
266 exclmarkers = set()
267 # we need fast membership testing
267 # we need fast membership testing
268 nodes = set(nodes)
268 nodes = set(nodes)
269 # looking for head in the obshistory
269 # looking for head in the obshistory
270 #
270 #
271 # XXX we are ignoring all issues in regard with cycle for now.
271 # XXX we are ignoring all issues in regard with cycle for now.
272 stack = [n for n in nodes if not _filterprunes(successormarkers.get(n, ()))]
272 stack = [n for n in nodes if not _filterprunes(successormarkers.get(n, ()))]
273 stack.sort()
273 stack.sort()
274 # nodes already stacked
274 # nodes already stacked
275 seennodes = set(stack)
275 seennodes = set(stack)
276 while stack:
276 while stack:
277 current = stack.pop()
277 current = stack.pop()
278 # fetch precursors markers
278 # fetch precursors markers
279 markers = list(precursorsmarkers.get(current, ()))
279 markers = list(precursorsmarkers.get(current, ()))
280 # extend the list with prune markers
280 # extend the list with prune markers
281 for mark in successormarkers.get(current, ()):
281 for mark in successormarkers.get(current, ()):
282 if not mark[1]:
282 if not mark[1]:
283 markers.append(mark)
283 markers.append(mark)
284 # and markers from children (looking for prune)
284 # and markers from children (looking for prune)
285 for mark in childrenmarkers.get(current, ()):
285 for mark in childrenmarkers.get(current, ()):
286 if not mark[1]:
286 if not mark[1]:
287 markers.append(mark)
287 markers.append(mark)
288 # traverse the markers
288 # traverse the markers
289 for mark in markers:
289 for mark in markers:
290 if mark in exclmarkers:
290 if mark in exclmarkers:
291 # markers already selected
291 # markers already selected
292 continue
292 continue
293
293
294 # If the markers is about the current node, select it
294 # If the markers is about the current node, select it
295 #
295 #
296 # (this delay the addition of markers from children)
296 # (this delay the addition of markers from children)
297 if mark[1] or mark[0] == current:
297 if mark[1] or mark[0] == current:
298 exclmarkers.add(mark)
298 exclmarkers.add(mark)
299
299
300 # should we keep traversing through the precursors?
300 # should we keep traversing through the precursors?
301 prec = mark[0]
301 prec = mark[0]
302
302
303 # nodes in the stack or already processed
303 # nodes in the stack or already processed
304 if prec in seennodes:
304 if prec in seennodes:
305 continue
305 continue
306
306
307 # is this a locally known node ?
307 # is this a locally known node ?
308 known = has_node(prec)
308 known = has_node(prec)
309 # if locally-known and not in the <nodes> set the traversal
309 # if locally-known and not in the <nodes> set the traversal
310 # stop here.
310 # stop here.
311 if known and prec not in nodes:
311 if known and prec not in nodes:
312 continue
312 continue
313
313
314 # do not keep going if there are unselected markers pointing to this
314 # do not keep going if there are unselected markers pointing to this
315 # nodes. If we end up traversing these unselected markers later the
315 # nodes. If we end up traversing these unselected markers later the
316 # node will be taken care of at that point.
316 # node will be taken care of at that point.
317 precmarkers = _filterprunes(successormarkers.get(prec))
317 precmarkers = _filterprunes(successormarkers.get(prec))
318 if precmarkers.issubset(exclmarkers):
318 if precmarkers.issubset(exclmarkers):
319 seennodes.add(prec)
319 seennodes.add(prec)
320 stack.append(prec)
320 stack.append(prec)
321
321
322 return exclmarkers
322 return exclmarkers
323
323
324
324
325 def foreground(repo, nodes):
325 def foreground(repo, nodes):
326 """return all nodes in the "foreground" of other node
326 """return all nodes in the "foreground" of other node
327
327
328 The foreground of a revision is anything reachable using parent -> children
328 The foreground of a revision is anything reachable using parent -> children
329 or precursor -> successor relation. It is very similar to "descendant" but
329 or precursor -> successor relation. It is very similar to "descendant" but
330 augmented with obsolescence information.
330 augmented with obsolescence information.
331
331
332 Beware that obsolescence cycles may arise in complex situations.
332 Beware that obsolescence cycles may arise in complex situations.
333 """
333 """
334 repo = repo.unfiltered()
334 repo = repo.unfiltered()
335 foreground = set(repo.set(b'%ln::', nodes))
335 foreground = set(repo.set(b'%ln::', nodes))
336 if repo.obsstore:
336 if repo.obsstore:
337 # We only need this complicated logic if there is obsolescence
337 # We only need this complicated logic if there is obsolescence
338 # XXX will probably deserve an optimised revset.
338 # XXX will probably deserve an optimised revset.
339 has_node = repo.changelog.index.has_node
339 has_node = repo.changelog.index.has_node
340 plen = -1
340 plen = -1
341 # compute the whole set of successors or descendants
341 # compute the whole set of successors or descendants
342 while len(foreground) != plen:
342 while len(foreground) != plen:
343 plen = len(foreground)
343 plen = len(foreground)
344 succs = {c.node() for c in foreground}
344 succs = {c.node() for c in foreground}
345 mutable = [c.node() for c in foreground if c.mutable()]
345 mutable = [c.node() for c in foreground if c.mutable()]
346 succs.update(allsuccessors(repo.obsstore, mutable))
346 succs.update(allsuccessors(repo.obsstore, mutable))
347 known = (n for n in succs if has_node(n))
347 known = (n for n in succs if has_node(n))
348 foreground = set(repo.set(b'%ln::', known))
348 foreground = set(repo.set(b'%ln::', known))
349 return {c.node() for c in foreground}
349 return {c.node() for c in foreground}
350
350
351
351
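# Illustrative sketch (hypothetical helper, not part of the original module):
# mapping the nodes returned by `foreground` back to revision numbers via the
# changelog index.  `repo` is assumed to be a local repository object and
# `nodes` a list of binary node ids.
def _example_foreground_revs(repo, nodes):
    """Return the sorted revision numbers of the foreground of `nodes`."""
    get_rev = repo.changelog.index.get_rev
    revs = (get_rev(n) for n in foreground(repo, nodes))
    return sorted(r for r in revs if r is not None)
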
352 # effectflag field
352 # effectflag field
353 #
353 #
354 # Effect-flag is a 1-byte bit field used to store what changed between a
354 # Effect-flag is a 1-byte bit field used to store what changed between a
355 # changeset and its successor(s).
355 # changeset and its successor(s).
356 #
356 #
357 # The effect flag is stored in obs-markers metadata while we iterate on the
357 # The effect flag is stored in obs-markers metadata while we iterate on the
358 # information design. That's why we have the EFFECTFLAGFIELD. If we come up
358 # information design. That's why we have the EFFECTFLAGFIELD. If we come up
359 # with an incompatible design for effect flag, we can store a new design under
359 # with an incompatible design for effect flag, we can store a new design under
360 # another field name so we don't break readers. We plan to extend the existing
360 # another field name so we don't break readers. We plan to extend the existing
361 # obsmarkers bit-field once the effect flag design is stabilized.
361 # obsmarkers bit-field once the effect flag design is stabilized.
362 #
362 #
363 # The effect-flag is placed behind an experimental flag
363 # The effect-flag is placed behind an experimental flag
364 # `effect-flags` set to off by default.
364 # `effect-flags` set to off by default.
365 #
365 #
366
366
367 EFFECTFLAGFIELD = b"ef1"
367 EFFECTFLAGFIELD = b"ef1"
368
368
369 DESCCHANGED = 1 << 0 # action changed the description
369 DESCCHANGED = 1 << 0 # action changed the description
370 METACHANGED = 1 << 1 # action changed the meta
370 METACHANGED = 1 << 1 # action changed the meta
371 DIFFCHANGED = 1 << 3 # action changed the diff introduced by the changeset
371 DIFFCHANGED = 1 << 3 # action changed the diff introduced by the changeset
372 PARENTCHANGED = 1 << 2 # action changed the parent
372 PARENTCHANGED = 1 << 2 # action changed the parent
373 USERCHANGED = 1 << 4 # the user changed
373 USERCHANGED = 1 << 4 # the user changed
374 DATECHANGED = 1 << 5 # the date changed
374 DATECHANGED = 1 << 5 # the date changed
375 BRANCHCHANGED = 1 << 6 # the branch changed
375 BRANCHCHANGED = 1 << 6 # the branch changed
376
376
377 METABLACKLIST = [
377 METABLACKLIST = [
378 re.compile(b'^branch$'),
378 re.compile(b'^branch$'),
379 re.compile(b'^.*-source$'),
379 re.compile(b'^.*-source$'),
380 re.compile(b'^.*_source$'),
380 re.compile(b'^.*_source$'),
381 re.compile(b'^source$'),
381 re.compile(b'^source$'),
382 ]
382 ]
383
383
384
384
385 def metanotblacklisted(metaitem):
385 def metanotblacklisted(metaitem):
386 """Check that the key of a meta item (extrakey, extravalue) does not
386 """Check that the key of a meta item (extrakey, extravalue) does not
387 match any of the blacklist patterns
387 match any of the blacklist patterns
388 """
388 """
389 metakey = metaitem[0]
389 metakey = metaitem[0]
390
390
391 return not any(pattern.match(metakey) for pattern in METABLACKLIST)
391 return not any(pattern.match(metakey) for pattern in METABLACKLIST)
392
392
393
393
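# Illustrative sketch (hypothetical values, not part of the original module):
# how METABLACKLIST filtering behaves on a few extra keys.  Keys such as
# b'rebase_source' or b'source' are dropped, while unrelated keys are kept and
# can later contribute to the METACHANGED effect flag.
def _example_filtered_extra():
    extra = {
        b'rebase_source': b'0123456789abcdef',  # matches '^.*_source$', dropped
        b'source': b'0123456789abcdef',  # matches '^source$', dropped
        b'topic': b'feature-x',  # no blacklist match, kept
    }
    return sorted(filter(metanotblacklisted, extra.items()))
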
394 def _prepare_hunk(hunk):
394 def _prepare_hunk(hunk):
395 """Drop all information but the username and patch"""
395 """Drop all information but the username and patch"""
396 cleanhunk = []
396 cleanhunk = []
397 for line in hunk.splitlines():
397 for line in hunk.splitlines():
398 if line.startswith(b'# User') or not line.startswith(b'#'):
398 if line.startswith(b'# User') or not line.startswith(b'#'):
399 if line.startswith(b'@@'):
399 if line.startswith(b'@@'):
400 line = b'@@\n'
400 line = b'@@\n'
401 cleanhunk.append(line)
401 cleanhunk.append(line)
402 return cleanhunk
402 return cleanhunk
403
403
404
404
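# Illustrative sketch (hypothetical input, not part of the original module):
# what `_prepare_hunk` keeps from a git-style patch.  Header comments other
# than '# User' are dropped and hunk headers are normalised to '@@'.
def _example_prepare_hunk():
    hunk = (
        b'# HG changeset patch\n'
        b'# User alice\n'
        b'diff --git a/a b/a\n'
        b'@@ -1,1 +1,1 @@\n'
        b'-old\n'
        b'+new\n'
    )
    return _prepare_hunk(hunk)
    # -> [b'# User alice', b'diff --git a/a b/a', b'@@\n', b'-old', b'+new']
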
405 def _getdifflines(iterdiff):
405 def _getdifflines(iterdiff):
406 """return a cleaned up lines"""
406 """return a cleaned up lines"""
407 lines = next(iterdiff, None)
407 lines = next(iterdiff, None)
408
408
409 if lines is None:
409 if lines is None:
410 return lines
410 return lines
411
411
412 return _prepare_hunk(lines)
412 return _prepare_hunk(lines)
413
413
414
414
415 def _cmpdiff(leftctx, rightctx):
415 def _cmpdiff(leftctx, rightctx):
416 """return True if both ctx introduce the "same diff"
416 """return True if both ctx introduce the "same diff"
417
417
418 This is a first, basic implementation with many shortcomings.
418 This is a first, basic implementation with many shortcomings.
419 """
419 """
420 diffopts = diffutil.diffallopts(leftctx.repo().ui, {b'git': True})
420 diffopts = diffutil.diffallopts(leftctx.repo().ui, {b'git': True})
421
421
422 # Leftctx or right ctx might be filtered, so we need to use the contexts
422 # Leftctx or right ctx might be filtered, so we need to use the contexts
423 # with an unfiltered repository to safely compute the diff
423 # with an unfiltered repository to safely compute the diff
424
424
425 # leftctx and rightctx can be from different repository views in case of
425 # leftctx and rightctx can be from different repository views in case of
426 # hgsubversion, so don't try to access them from the same repository;
426 # hgsubversion, so don't try to access them from the same repository;
427 # rightctx.repo() and leftctx.repo() are not always the same
427 # rightctx.repo() and leftctx.repo() are not always the same
428 leftunfi = leftctx._repo.unfiltered()[leftctx.rev()]
428 leftunfi = leftctx._repo.unfiltered()[leftctx.rev()]
429 leftdiff = leftunfi.diff(opts=diffopts)
429 leftdiff = leftunfi.diff(opts=diffopts)
430 rightunfi = rightctx._repo.unfiltered()[rightctx.rev()]
430 rightunfi = rightctx._repo.unfiltered()[rightctx.rev()]
431 rightdiff = rightunfi.diff(opts=diffopts)
431 rightdiff = rightunfi.diff(opts=diffopts)
432
432
433 left, right = (0, 0)
433 left, right = (0, 0)
434 while None not in (left, right):
434 while None not in (left, right):
435 left = _getdifflines(leftdiff)
435 left = _getdifflines(leftdiff)
436 right = _getdifflines(rightdiff)
436 right = _getdifflines(rightdiff)
437
437
438 if left != right:
438 if left != right:
439 return False
439 return False
440 return True
440 return True
441
441
442
442
443 def geteffectflag(source, successors):
443 def geteffectflag(source, successors):
444 """From an obs-marker relation, compute what changed between the
444 """From an obs-marker relation, compute what changed between the
445 predecessor and the successor.
445 predecessor and the successor.
446 """
446 """
447 effects = 0
447 effects = 0
448
448
449 for changectx in successors:
449 for changectx in successors:
450 # Check if description has changed
450 # Check if description has changed
451 if changectx.description() != source.description():
451 if changectx.description() != source.description():
452 effects |= DESCCHANGED
452 effects |= DESCCHANGED
453
453
454 # Check if user has changed
454 # Check if user has changed
455 if changectx.user() != source.user():
455 if changectx.user() != source.user():
456 effects |= USERCHANGED
456 effects |= USERCHANGED
457
457
458 # Check if date has changed
458 # Check if date has changed
459 if changectx.date() != source.date():
459 if changectx.date() != source.date():
460 effects |= DATECHANGED
460 effects |= DATECHANGED
461
461
462 # Check if branch has changed
462 # Check if branch has changed
463 if changectx.branch() != source.branch():
463 if changectx.branch() != source.branch():
464 effects |= BRANCHCHANGED
464 effects |= BRANCHCHANGED
465
465
466 # Check if at least one of the parents has changed
466 # Check if at least one of the parents has changed
467 if changectx.parents() != source.parents():
467 if changectx.parents() != source.parents():
468 effects |= PARENTCHANGED
468 effects |= PARENTCHANGED
469
469
470 # Check if other meta has changed
470 # Check if other meta has changed
471 changeextra = changectx.extra().items()
471 changeextra = changectx.extra().items()
472 ctxmeta = sorted(filter(metanotblacklisted, changeextra))
472 ctxmeta = sorted(filter(metanotblacklisted, changeextra))
473
473
474 sourceextra = source.extra().items()
474 sourceextra = source.extra().items()
475 srcmeta = sorted(filter(metanotblacklisted, sourceextra))
475 srcmeta = sorted(filter(metanotblacklisted, sourceextra))
476
476
477 if ctxmeta != srcmeta:
477 if ctxmeta != srcmeta:
478 effects |= METACHANGED
478 effects |= METACHANGED
479
479
480 # Check if the diff has changed
480 # Check if the diff has changed
481 if not _cmpdiff(source, changectx):
481 if not _cmpdiff(source, changectx):
482 effects |= DIFFCHANGED
482 effects |= DIFFCHANGED
483
483
484 return effects
484 return effects
485
485
486
486
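# Illustrative sketch (hypothetical helper and labels, not part of the
# original module): turning a value returned by `geteffectflag` back into
# human readable labels.
def _example_describe_effects(effects):
    """Return the labels of the bits set in an effect-flag value."""
    labels = [
        (DESCCHANGED, b'description'),
        (METACHANGED, b'meta'),
        (PARENTCHANGED, b'parents'),
        (DIFFCHANGED, b'diff'),
        (USERCHANGED, b'user'),
        (DATECHANGED, b'date'),
        (BRANCHCHANGED, b'branch'),
    ]
    return [name for bit, name in labels if effects & bit]
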
487 def getobsoleted(repo, tr=None, changes=None):
487 def getobsoleted(repo, tr=None, changes=None):
488 """return the set of pre-existing revisions obsoleted by a transaction
488 """return the set of pre-existing revisions obsoleted by a transaction
489
489
490 Either the transaction or changes item of the transaction (for hooks)
490 Either the transaction or changes item of the transaction (for hooks)
491 must be provided, but not both.
491 must be provided, but not both.
492 """
492 """
493 if (tr is None) == (changes is None):
493 if (tr is None) == (changes is None):
494 e = b"exactly one of tr and changes must be provided"
494 e = b"exactly one of tr and changes must be provided"
495 raise error.ProgrammingError(e)
495 raise error.ProgrammingError(e)
496 torev = repo.unfiltered().changelog.index.get_rev
496 torev = repo.unfiltered().changelog.index.get_rev
497 phase = repo._phasecache.phase
497 phase = repo._phasecache.phase
498 succsmarkers = repo.obsstore.successors.get
498 succsmarkers = repo.obsstore.successors.get
499 public = phases.public
499 public = phases.public
500 if changes is None:
500 if changes is None:
501 changes = tr.changes
501 changes = tr.changes
502 addedmarkers = changes[b'obsmarkers']
502 addedmarkers = changes[b'obsmarkers']
503 origrepolen = changes[b'origrepolen']
503 origrepolen = changes[b'origrepolen']
504 seenrevs = set()
504 seenrevs = set()
505 obsoleted = set()
505 obsoleted = set()
506 for mark in addedmarkers:
506 for mark in addedmarkers:
507 node = mark[0]
507 node = mark[0]
508 rev = torev(node)
508 rev = torev(node)
509 if rev is None or rev in seenrevs or rev >= origrepolen:
509 if rev is None or rev in seenrevs or rev >= origrepolen:
510 continue
510 continue
511 seenrevs.add(rev)
511 seenrevs.add(rev)
512 if phase(repo, rev) == public:
512 if phase(repo, rev) == public:
513 continue
513 continue
514 if set(succsmarkers(node) or []).issubset(addedmarkers):
514 if set(succsmarkers(node) or []).issubset(addedmarkers):
515 obsoleted.add(rev)
515 obsoleted.add(rev)
516 return obsoleted
516 return obsoleted
517
517
518
518
519 class _succs(list):
519 class _succs(list):
520 """small class to represent a successors with some metadata about it"""
520 """small class to represent a successors with some metadata about it"""
521
521
522 def __init__(self, *args, **kwargs):
522 def __init__(self, *args, **kwargs):
523 super(_succs, self).__init__(*args, **kwargs)
523 super(_succs, self).__init__(*args, **kwargs)
524 self.markers = set()
524 self.markers = set()
525
525
526 def copy(self):
526 def copy(self):
527 new = _succs(self)
527 new = _succs(self)
528 new.markers = self.markers.copy()
528 new.markers = self.markers.copy()
529 return new
529 return new
530
530
531 @util.propertycache
531 @util.propertycache
532 def _set(self):
532 def _set(self):
533 # immutable
533 # immutable
534 return set(self)
534 return set(self)
535
535
536 def canmerge(self, other):
536 def canmerge(self, other):
537 return self._set.issubset(other._set)
537 return self._set.issubset(other._set)
538
538
539
539
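# Illustrative sketch (hypothetical node names, not part of the original
# module): how two `_succs` instances merge during post-processing.  A
# successors set that is a subset of another can be folded into it, pooling
# the markers of both.
def _example_succs_merge():
    small = _succs([b'node-A'])
    large = _succs([b'node-A', b'node-B'])
    if small.canmerge(large):
        large.markers.update(small.markers)
    return large
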
540 def successorssets(repo, initialnode, closest=False, cache=None):
540 def successorssets(repo, initialnode, closest=False, cache=None):
541 """Return set of all latest successors of initial nodes
541 """Return set of all latest successors of initial nodes
542
542
543 The successors set of a changeset A is the group of revisions that succeed
543 The successors set of a changeset A is the group of revisions that succeed
544 A. It succeeds A as a consistent whole, each revision being only a partial
544 A. It succeeds A as a consistent whole, each revision being only a partial
545 replacement. By default, the successors set contains non-obsolete
545 replacement. By default, the successors set contains non-obsolete
546 changesets only, walking the obsolescence graph until reaching a leaf. If
546 changesets only, walking the obsolescence graph until reaching a leaf. If
547 'closest' is set to True, the closest successors-sets are returned (the
547 'closest' is set to True, the closest successors-sets are returned (the
548 obsolescence walk stops on known changesets).
548 obsolescence walk stops on known changesets).
549
549
550 This function returns the full list of successor sets which is why it
550 This function returns the full list of successor sets which is why it
551 returns a list of tuples and not just a single tuple. Each tuple is a valid
551 returns a list of tuples and not just a single tuple. Each tuple is a valid
552 successors set. Note that (A,) may be a valid successors set for changeset A
552 successors set. Note that (A,) may be a valid successors set for changeset A
553 (see below).
553 (see below).
554
554
555 In most cases, a changeset A will have a single element (e.g. the changeset
555 In most cases, a changeset A will have a single element (e.g. the changeset
556 A is replaced by A') in its successors set. Though, it is also common for a
556 A is replaced by A') in its successors set. Though, it is also common for a
557 changeset A to have no elements in its successor set (e.g. the changeset
557 changeset A to have no elements in its successor set (e.g. the changeset
558 has been pruned). Therefore, the returned list of successors sets will be
558 has been pruned). Therefore, the returned list of successors sets will be
559 [(A',)] or [], respectively.
559 [(A',)] or [], respectively.
560
560
561 When a changeset A is split into A' and B', however, it will result in a
561 When a changeset A is split into A' and B', however, it will result in a
562 successors set containing more than a single element, i.e. [(A',B')].
562 successors set containing more than a single element, i.e. [(A',B')].
563 Divergent changesets will result in multiple successors sets, i.e. [(A',),
563 Divergent changesets will result in multiple successors sets, i.e. [(A',),
564 (A'')].
564 (A'')].
565
565
566 If a changeset A is not obsolete, then it will conceptually have no
566 If a changeset A is not obsolete, then it will conceptually have no
567 successors set. To distinguish this from a pruned changeset, the successor
567 successors set. To distinguish this from a pruned changeset, the successor
568 set will contain itself only, i.e. [(A,)].
568 set will contain itself only, i.e. [(A,)].
569
569
570 Finally, final successors unknown locally are considered to be pruned
570 Finally, final successors unknown locally are considered to be pruned
571 (pruned: obsoleted without any successors; final: successors not affected
571 (pruned: obsoleted without any successors; final: successors not affected
572 by any markers).
572 by any markers).
573
573
574 The 'closest' mode respects the repoview filtering. For example, without a
574 The 'closest' mode respects the repoview filtering. For example, without a
575 filter it will stop at the first locally known changeset; with the 'visible'
575 filter it will stop at the first locally known changeset; with the 'visible'
576 filter it will stop on visible changesets.
576 filter it will stop on visible changesets.
577
577
578 The optional `cache` parameter is a dictionary that may contain
578 The optional `cache` parameter is a dictionary that may contain
579 precomputed successors sets. It is meant to reuse the computation of a
579 precomputed successors sets. It is meant to reuse the computation of a
580 previous call to `successorssets` when multiple calls are made at the same
580 previous call to `successorssets` when multiple calls are made at the same
581 time. The cache dictionary is updated in place. The caller is responsible
581 time. The cache dictionary is updated in place. The caller is responsible
582 for its life span. Code that makes multiple calls to `successorssets`
582 for its life span. Code that makes multiple calls to `successorssets`
583 *should* use this cache mechanism or risk a performance hit.
583 *should* use this cache mechanism or risk a performance hit.
584
584
585 Since results differ depending on the 'closest' mode, the same cache
585 Since results differ depending on the 'closest' mode, the same cache
586 cannot be reused for both modes.
586 cannot be reused for both modes.
587 """
587 """
588
588
589 succmarkers = repo.obsstore.successors
589 succmarkers = repo.obsstore.successors
590
590
591 # Stack of nodes we search successors sets for
591 # Stack of nodes we search successors sets for
592 toproceed = [initialnode]
592 toproceed = [initialnode]
593 # set version of above list for fast loop detection
593 # set version of above list for fast loop detection
594 # element added to "toproceed" must be added here
594 # element added to "toproceed" must be added here
595 stackedset = set(toproceed)
595 stackedset = set(toproceed)
596 if cache is None:
596 if cache is None:
597 cache = {}
597 cache = {}
598
598
599 # This while loop is the flattened version of a recursive search for
599 # This while loop is the flattened version of a recursive search for
600 # successors sets
600 # successors sets
601 #
601 #
602 # def successorssets(x):
602 # def successorssets(x):
603 # successors = directsuccessors(x)
603 # successors = directsuccessors(x)
604 # ss = [[]]
604 # ss = [[]]
605 # for succ in directsuccessors(x):
605 # for succ in directsuccessors(x):
606 # # product as in itertools cartesian product
606 # # product as in itertools cartesian product
607 # ss = product(ss, successorssets(succ))
607 # ss = product(ss, successorssets(succ))
608 # return ss
608 # return ss
609 #
609 #
610 # But we cannot use plain recursive calls here:
610 # But we cannot use plain recursive calls here:
611 # - that would blow the python call stack
611 # - that would blow the python call stack
612 # - obsolescence markers may have cycles, we need to handle them.
612 # - obsolescence markers may have cycles, we need to handle them.
613 #
613 #
614 # The `toproceed` list acts as our call stack. Every node we search
614 # The `toproceed` list acts as our call stack. Every node we search
615 # successors sets for is stacked there.
615 # successors sets for is stacked there.
616 #
616 #
617 # The `stackedset` is a set version of this stack, used to check if a node is
617 # The `stackedset` is a set version of this stack, used to check if a node is
618 # already stacked. This check is used to detect cycles and prevent infinite
618 # already stacked. This check is used to detect cycles and prevent infinite
619 # loops.
619 # loops.
620 #
620 #
621 # successors sets of all nodes are stored in the `cache` dictionary.
621 # successors sets of all nodes are stored in the `cache` dictionary.
622 #
622 #
623 # After this while loop ends we use the cache to return the successors sets
623 # After this while loop ends we use the cache to return the successors sets
624 # for the node requested by the caller.
624 # for the node requested by the caller.
625 while toproceed:
625 while toproceed:
626 # Every iteration tries to compute the successors sets of the topmost
626 # Every iteration tries to compute the successors sets of the topmost
627 # node of the stack: CURRENT.
627 # node of the stack: CURRENT.
628 #
628 #
629 # There are four possible outcomes:
629 # There are four possible outcomes:
630 #
630 #
631 # 1) We already know the successors sets of CURRENT:
631 # 1) We already know the successors sets of CURRENT:
632 # -> mission accomplished, pop it from the stack.
632 # -> mission accomplished, pop it from the stack.
633 # 2) Stop the walk:
633 # 2) Stop the walk:
634 # default case: Node is not obsolete
634 # default case: Node is not obsolete
635 # closest case: Node is known at this repo filter level
635 # closest case: Node is known at this repo filter level
636 # -> the node is its own successors sets. Add it to the cache.
636 # -> the node is its own successors sets. Add it to the cache.
637 # 3) We do not know successors set of direct successors of CURRENT:
637 # 3) We do not know successors set of direct successors of CURRENT:
638 # -> We add those successors to the stack.
638 # -> We add those successors to the stack.
639 # 4) We know successors sets of all direct successors of CURRENT:
639 # 4) We know successors sets of all direct successors of CURRENT:
640 # -> We can compute CURRENT successors set and add it to the
640 # -> We can compute CURRENT successors set and add it to the
641 # cache.
641 # cache.
642 #
642 #
643 current = toproceed[-1]
643 current = toproceed[-1]
644
644
645 # case 2 condition is a bit hairy because of closest,
645 # case 2 condition is a bit hairy because of closest,
646 # we compute it on its own
646 # we compute it on its own
647 case2condition = (current not in succmarkers) or (
647 case2condition = (current not in succmarkers) or (
648 closest and current != initialnode and current in repo
648 closest and current != initialnode and current in repo
649 )
649 )
650
650
651 if current in cache:
651 if current in cache:
652 # case (1): We already know the successors sets
652 # case (1): We already know the successors sets
653 stackedset.remove(toproceed.pop())
653 stackedset.remove(toproceed.pop())
654 elif case2condition:
654 elif case2condition:
655 # case (2): end of walk.
655 # case (2): end of walk.
656 if current in repo:
656 if current in repo:
657 # We have a valid successor.
657 # We have a valid successor.
658 cache[current] = [_succs((current,))]
658 cache[current] = [_succs((current,))]
659 else:
659 else:
660 # Final obsolete version is unknown locally.
660 # Final obsolete version is unknown locally.
661 # Do not count that as a valid successor.
661 # Do not count that as a valid successor.
662 cache[current] = []
662 cache[current] = []
663 else:
663 else:
664 # cases (3) and (4)
664 # cases (3) and (4)
665 #
665 #
666 # We proceed in two phases. Phase 1 aims to distinguish case (3)
666 # We proceed in two phases. Phase 1 aims to distinguish case (3)
667 # from case (4):
667 # from case (4):
668 #
668 #
669 # For each direct successor of CURRENT, we check whether its
669 # For each direct successor of CURRENT, we check whether its
670 # successors sets are known. If they are not, we stack the
670 # successors sets are known. If they are not, we stack the
671 # unknown node and proceed to the next iteration of the while
671 # unknown node and proceed to the next iteration of the while
672 # loop. (case 3)
672 # loop. (case 3)
673 #
673 #
674 # During this step, we may detect obsolescence cycles: a node
674 # During this step, we may detect obsolescence cycles: a node
675 # with unknown successors sets but already in the call stack.
675 # with unknown successors sets but already in the call stack.
676 # In such a situation, we arbitrarily set the successors sets of
676 # In such a situation, we arbitrarily set the successors sets of
677 # the node to nothing (node pruned) to break the cycle.
677 # the node to nothing (node pruned) to break the cycle.
678 #
678 #
679 # If no break was encountered we proceed to phase 2.
679 # If no break was encountered we proceed to phase 2.
680 #
680 #
681 # Phase 2 computes successors sets of CURRENT (case 4); see details
681 # Phase 2 computes successors sets of CURRENT (case 4); see details
682 # in phase 2 itself.
682 # in phase 2 itself.
683 #
683 #
684 # Note the two levels of iteration in each phase.
684 # Note the two levels of iteration in each phase.
685 # - The first one handles obsolescence markers using CURRENT as
685 # - The first one handles obsolescence markers using CURRENT as
686 # precursor (successors markers of CURRENT).
686 # precursor (successors markers of CURRENT).
687 #
687 #
688 # Having multiple entries here means divergence.
688 # Having multiple entries here means divergence.
689 #
689 #
690 # - The second one handles successors defined in each marker.
690 # - The second one handles successors defined in each marker.
691 #
691 #
692 # Having none means pruned node, multiple successors means split,
692 # Having none means pruned node, multiple successors means split,
693 # single successors are standard replacement.
693 # single successors are standard replacement.
694 #
694 #
695 for mark in sortedmarkers(succmarkers[current]):
695 for mark in sortedmarkers(succmarkers[current]):
696 for suc in mark[1]:
696 for suc in mark[1]:
697 if suc not in cache:
697 if suc not in cache:
698 if suc in stackedset:
698 if suc in stackedset:
699 # cycle breaking
699 # cycle breaking
700 cache[suc] = []
700 cache[suc] = []
701 else:
701 else:
702 # case (3) If we have not computed successors sets
702 # case (3) If we have not computed successors sets
703 # of one of those successors we add it to the
703 # of one of those successors we add it to the
704 # `toproceed` stack and stop all work for this
704 # `toproceed` stack and stop all work for this
705 # iteration.
705 # iteration.
706 toproceed.append(suc)
706 toproceed.append(suc)
707 stackedset.add(suc)
707 stackedset.add(suc)
708 break
708 break
709 else:
709 else:
710 continue
710 continue
711 break
711 break
712 else:
712 else:
713 # case (4): we know all successors sets of all direct
713 # case (4): we know all successors sets of all direct
714 # successors
714 # successors
715 #
715 #
716 # Successors set contributed by each marker depends on the
716 # Successors set contributed by each marker depends on the
717 # successors sets of all its "successors" nodes.
717 # successors sets of all its "successors" nodes.
718 #
718 #
719 # Each different marker is a divergence in the obsolescence
719 # Each different marker is a divergence in the obsolescence
720 # history. It contributes successors sets distinct from other
720 # history. It contributes successors sets distinct from other
721 # markers.
721 # markers.
722 #
722 #
723 # Within a marker, a successor may have divergent successors
723 # Within a marker, a successor may have divergent successors
724 # sets. In such a case, the marker will contribute multiple
724 # sets. In such a case, the marker will contribute multiple
725 # divergent successors sets. If multiple successors have
725 # divergent successors sets. If multiple successors have
726 # divergent successors sets, a Cartesian product is used.
726 # divergent successors sets, a Cartesian product is used.
727 #
727 #
728 # At the end we post-process successors sets to remove
728 # At the end we post-process successors sets to remove
729 # duplicated entries and successors sets that are strict subsets of
729 # duplicated entries and successors sets that are strict subsets of
730 # another one.
730 # another one.
731 succssets = []
731 succssets = []
732 for mark in sortedmarkers(succmarkers[current]):
732 for mark in sortedmarkers(succmarkers[current]):
733 # successors sets contributed by this marker
733 # successors sets contributed by this marker
734 base = _succs()
734 base = _succs()
735 base.markers.add(mark)
735 base.markers.add(mark)
736 markss = [base]
736 markss = [base]
737 for suc in mark[1]:
737 for suc in mark[1]:
738 # cartesian product with previous successors
738 # cartesian product with previous successors
739 productresult = []
739 productresult = []
740 for prefix in markss:
740 for prefix in markss:
741 for suffix in cache[suc]:
741 for suffix in cache[suc]:
742 newss = prefix.copy()
742 newss = prefix.copy()
743 newss.markers.update(suffix.markers)
743 newss.markers.update(suffix.markers)
744 for part in suffix:
744 for part in suffix:
745 # do not duplicate entries in the successors set;
745 # do not duplicate entries in the successors set;
746 # the first entry wins.
746 # the first entry wins.
747 if part not in newss:
747 if part not in newss:
748 newss.append(part)
748 newss.append(part)
749 productresult.append(newss)
749 productresult.append(newss)
750 if productresult:
750 if productresult:
751 markss = productresult
751 markss = productresult
752 succssets.extend(markss)
752 succssets.extend(markss)
753 # remove duplicated and subset
753 # remove duplicated and subset
754 seen = []
754 seen = []
755 final = []
755 final = []
756 candidates = sorted(
756 candidates = sorted(
757 (s for s in succssets if s), key=len, reverse=True
757 (s for s in succssets if s), key=len, reverse=True
758 )
758 )
759 for cand in candidates:
759 for cand in candidates:
760 for seensuccs in seen:
760 for seensuccs in seen:
761 if cand.canmerge(seensuccs):
761 if cand.canmerge(seensuccs):
762 seensuccs.markers.update(cand.markers)
762 seensuccs.markers.update(cand.markers)
763 break
763 break
764 else:
764 else:
765 final.append(cand)
765 final.append(cand)
766 seen.append(cand)
766 seen.append(cand)
767 final.reverse() # put small successors set first
767 final.reverse() # put small successors set first
768 cache[current] = final
768 cache[current] = final
769 return cache[initialnode]
769 return cache[initialnode]
770
770
771
771
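# Illustrative sketch (hypothetical helper, not part of the original module):
# sharing a single `cache` dictionary across several `successorssets` calls,
# as the docstring above recommends.  `repo` and `nodes` are assumed.
def _example_allsuccessorssets(repo, nodes, closest=False):
    """Return a {node: successors sets} mapping computed with one shared cache."""
    cache = {}
    return {
        node: successorssets(repo, node, closest=closest, cache=cache)
        for node in nodes
    }
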
772 def successorsandmarkers(repo, ctx):
772 def successorsandmarkers(repo, ctx):
773 """compute the raw data needed for computing obsfate
773 """compute the raw data needed for computing obsfate
774 Returns a list of dicts, one dict per successors set
774 Returns a list of dicts, one dict per successors set
775 """
775 """
776 if not ctx.obsolete():
776 if not ctx.obsolete():
777 return None
777 return None
778
778
779 ssets = successorssets(repo, ctx.node(), closest=True)
779 ssets = successorssets(repo, ctx.node(), closest=True)
780
780
781 # closestsuccessors returns an empty list for pruned revisions, remap it
781 # closestsuccessors returns an empty list for pruned revisions, remap it
782 # into a list containing an empty list for future processing
782 # into a list containing an empty list for future processing
783 if ssets == []:
783 if ssets == []:
784 ssets = [_succs()]
784 ssets = [_succs()]
785
785
786 # Try to recover pruned markers
786 # Try to recover pruned markers
787 succsmap = repo.obsstore.successors
787 succsmap = repo.obsstore.successors
788 fullsuccessorsets = [] # successor set + markers
788 fullsuccessorsets = [] # successor set + markers
789 for sset in ssets:
789 for sset in ssets:
790 if sset:
790 if sset:
791 fullsuccessorsets.append(sset)
791 fullsuccessorsets.append(sset)
792 else:
792 else:
793 # successorssets returns an empty list when ctx or one of its
793 # successorssets returns an empty list when ctx or one of its
794 # successors is pruned.
794 # successors is pruned.
795 # In this case, walk the obs-markers tree again starting with ctx
795 # In this case, walk the obs-markers tree again starting with ctx
796 # and find the relevant pruning obs-markers, the ones without
796 # and find the relevant pruning obs-markers, the ones without
797 # successors.
797 # successors.
798 # Having these markers allows us to compute some information about
798 # Having these markers allows us to compute some information about
799 # its fate, like who pruned this changeset and when.
799 # its fate, like who pruned this changeset and when.
800
800
801 # XXX we do not catch all prune markers (eg rewritten then pruned)
801 # XXX we do not catch all prune markers (eg rewritten then pruned)
802 # (fix me later)
802 # (fix me later)
803 foundany = False
803 foundany = False
804 for mark in succsmap.get(ctx.node(), ()):
804 for mark in succsmap.get(ctx.node(), ()):
805 if not mark[1]:
805 if not mark[1]:
806 foundany = True
806 foundany = True
807 sset = _succs()
807 sset = _succs()
808 sset.markers.add(mark)
808 sset.markers.add(mark)
809 fullsuccessorsets.append(sset)
809 fullsuccessorsets.append(sset)
810 if not foundany:
810 if not foundany:
811 fullsuccessorsets.append(_succs())
811 fullsuccessorsets.append(_succs())
812
812
813 values = []
813 values = []
814 for sset in fullsuccessorsets:
814 for sset in fullsuccessorsets:
815 values.append({b'successors': sset, b'markers': sset.markers})
815 values.append({b'successors': sset, b'markers': sset.markers})
816
816
817 return values
817 return values
818
818
819
819
820 def _getobsfate(successorssets):
820 def _getobsfate(successorssets):
821 """Compute a changeset obsolescence fate based on its successorssets.
821 """Compute a changeset obsolescence fate based on its successorssets.
822 Successors can be the tipmost ones or the immediate ones. This function's
822 Successors can be the tipmost ones or the immediate ones. This function's
823 return values are not meant to be shown directly to users; they are meant to
823 return values are not meant to be shown directly to users; they are meant to
824 be used by internal functions only.
824 be used by internal functions only.
825 Returns one fate from the following values:
825 Returns one fate from the following values:
826 - pruned
826 - pruned
827 - diverged
827 - diverged
828 - superseded
828 - superseded
829 - superseded_split
829 - superseded_split
830 """
830 """
831
831
832 if len(successorssets) == 0:
832 if len(successorssets) == 0:
833 # The commit has been pruned
833 # The commit has been pruned
834 return b'pruned'
834 return b'pruned'
835 elif len(successorssets) > 1:
835 elif len(successorssets) > 1:
836 return b'diverged'
836 return b'diverged'
837 else:
837 else:
838 # No divergence, only one set of successors
838 # No divergence, only one set of successors
839 successors = successorssets[0]
839 successors = successorssets[0]
840
840
841 if len(successors) == 1:
841 if len(successors) == 1:
842 return b'superseded'
842 return b'superseded'
843 else:
843 else:
844 return b'superseded_split'
844 return b'superseded_split'
845
845
846
846
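# Illustrative sketch (hypothetical node names, not part of the original
# module): the mapping from successors-sets shapes to obsolescence fates.
def _example_obsfates():
    assert _getobsfate([]) == b'pruned'
    assert _getobsfate([[b'A1']]) == b'superseded'
    assert _getobsfate([[b'A1', b'A2']]) == b'superseded_split'
    assert _getobsfate([[b'A1'], [b'A2']]) == b'diverged'
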
847 def obsfateverb(successorset, markers):
847 def obsfateverb(successorset, markers):
848 """Return the verb summarizing the successorset and potentially using
848 """Return the verb summarizing the successorset and potentially using
849 information from the markers
849 information from the markers
850 """
850 """
851 if not successorset:
851 if not successorset:
852 verb = b'pruned'
852 verb = b'pruned'
853 elif len(successorset) == 1:
853 elif len(successorset) == 1:
854 verb = b'rewritten'
854 verb = b'rewritten'
855 else:
855 else:
856 verb = b'split'
856 verb = b'split'
857 return verb
857 return verb
858
858
859
859
860 def markersdates(markers):
860 def markersdates(markers):
861 """returns the list of dates for a list of markers"""
861 """returns the list of dates for a list of markers"""
862 return [m[4] for m in markers]
862 return [m[4] for m in markers]
863
863
864
864
865 def markersusers(markers):
865 def markersusers(markers):
866 """Returns a sorted list of markers users without duplicates"""
866 """Returns a sorted list of markers users without duplicates"""
867 markersmeta = [dict(m[3]) for m in markers]
867 markersmeta = [dict(m[3]) for m in markers]
868 users = {
868 users = {
869 encoding.tolocal(meta[b'user'])
869 encoding.tolocal(meta[b'user'])
870 for meta in markersmeta
870 for meta in markersmeta
871 if meta.get(b'user')
871 if meta.get(b'user')
872 }
872 }
873
873
874 return sorted(users)
874 return sorted(users)
875
875
876
876
877 def markersoperations(markers):
877 def markersoperations(markers):
878 """Returns a sorted list of markers operations without duplicates"""
878 """Returns a sorted list of markers operations without duplicates"""
879 markersmeta = [dict(m[3]) for m in markers]
879 markersmeta = [dict(m[3]) for m in markers]
880 operations = {
880 operations = {
881 meta.get(b'operation') for meta in markersmeta if meta.get(b'operation')
881 meta.get(b'operation') for meta in markersmeta if meta.get(b'operation')
882 }
882 }
883
883
884 return sorted(operations)
884 return sorted(operations)
885
885
886
886
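# Illustrative sketch (hypothetical marker tuples, not part of the original
# module): only the metadata (index 3) and date (index 4) fields used by the
# helpers above are filled in meaningfully.
def _example_marker_summary():
    markers = [
        (b'\x00' * 20, (), 0, ((b'user', b'alice'), (b'operation', b'amend')),
         (0.0, 0), None),
        (b'\x00' * 20, (), 0, ((b'user', b'bob'), (b'operation', b'rebase')),
         (60.0, 0), None),
    ]
    return (
        markersusers(markers),  # [b'alice', b'bob']
        markersoperations(markers),  # [b'amend', b'rebase']
        markersdates(markers),  # [(0.0, 0), (60.0, 0)]
    )
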
887 def obsfateprinter(ui, repo, successors, markers, formatctx):
887 def obsfateprinter(ui, repo, successors, markers, formatctx):
888 """Build a obsfate string for a single successorset using all obsfate
888 """Build a obsfate string for a single successorset using all obsfate
889 related function defined in obsutil
889 related function defined in obsutil
890 """
890 """
891 quiet = ui.quiet
891 quiet = ui.quiet
892 verbose = ui.verbose
892 verbose = ui.verbose
893 normal = not verbose and not quiet
893 normal = not verbose and not quiet
894
894
895 line = []
895 line = []
896
896
897 # Verb
897 # Verb
898 line.append(obsfateverb(successors, markers))
898 line.append(obsfateverb(successors, markers))
899
899
900 # Operations
900 # Operations
901 operations = markersoperations(markers)
901 operations = markersoperations(markers)
902 if operations:
902 if operations:
903 line.append(b" using %s" % b", ".join(operations))
903 line.append(b" using %s" % b", ".join(operations))
904
904
905 # Successors
905 # Successors
906 if successors:
906 if successors:
907 fmtsuccessors = [formatctx(repo[succ]) for succ in successors]
907 fmtsuccessors = [formatctx(repo[succ]) for succ in successors]
908 line.append(b" as %s" % b", ".join(fmtsuccessors))
908 line.append(b" as %s" % b", ".join(fmtsuccessors))
909
909
910 # Users
910 # Users
911 users = markersusers(markers)
911 users = markersusers(markers)
912 # Filter out the current user in non-verbose mode to reduce the amount of
912 # Filter out the current user in non-verbose mode to reduce the amount of
913 # information
913 # information
914 if not verbose:
914 if not verbose:
915 currentuser = ui.username(acceptempty=True)
915 currentuser = ui.username(acceptempty=True)
916 if len(users) == 1 and currentuser in users:
916 if len(users) == 1 and currentuser in users:
917 users = None
917 users = None
918
918
919 if (verbose or normal) and users:
919 if (verbose or normal) and users:
920 line.append(b" by %s" % b", ".join(users))
920 line.append(b" by %s" % b", ".join(users))
921
921
922 # Date
922 # Date
923 dates = markersdates(markers)
923 dates = markersdates(markers)
924
924
925 if dates and verbose:
925 if dates and verbose:
926 min_date = min(dates)
926 min_date = min(dates)
927 max_date = max(dates)
927 max_date = max(dates)
928
928
929 if min_date == max_date:
929 if min_date == max_date:
930 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
930 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
931 line.append(b" (at %s)" % fmtmin_date)
931 line.append(b" (at %s)" % fmtmin_date)
932 else:
932 else:
933 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
933 fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
934 fmtmax_date = dateutil.datestr(max_date, b'%Y-%m-%d %H:%M %1%2')
934 fmtmax_date = dateutil.datestr(max_date, b'%Y-%m-%d %H:%M %1%2')
935 line.append(b" (between %s and %s)" % (fmtmin_date, fmtmax_date))
935 line.append(b" (between %s and %s)" % (fmtmin_date, fmtmax_date))
936
936
937 return b"".join(line)
937 return b"".join(line)
938
938
939
939
940 filteredmsgtable = {
940 filteredmsgtable = {
941 b"pruned": _(b"hidden revision '%s' is pruned"),
941 b"pruned": _(b"hidden revision '%s' is pruned"),
942 b"diverged": _(b"hidden revision '%s' has diverged"),
942 b"diverged": _(b"hidden revision '%s' has diverged"),
943 b"superseded": _(b"hidden revision '%s' was rewritten as: %s"),
943 b"superseded": _(b"hidden revision '%s' was rewritten as: %s"),
944 b"superseded_split": _(b"hidden revision '%s' was split as: %s"),
944 b"superseded_split": _(b"hidden revision '%s' was split as: %s"),
945 b"superseded_split_several": _(
945 b"superseded_split_several": _(
946 b"hidden revision '%s' was split as: %s and %d more"
946 b"hidden revision '%s' was split as: %s and %d more"
947 ),
947 ),
948 }
948 }
949
949
950
950
951 def _getfilteredreason(repo, changeid, ctx) -> bytes:
951 def _getfilteredreason(repo, changeid, ctx) -> bytes:
952 """return a human-friendly string on why a obsolete changeset is hidden"""
952 """return a human-friendly string on why a obsolete changeset is hidden"""
953 successors = successorssets(repo, ctx.node())
953 successors = successorssets(repo, ctx.node())
954 fate = _getobsfate(successors)
954 fate = _getobsfate(successors)
955
955
956 # Be more precise in case the revision is superseded
956 # Be more precise in case the revision is superseded
957 if fate == b'pruned':
957 if fate == b'pruned':
958 return filteredmsgtable[b'pruned'] % changeid
958 return filteredmsgtable[b'pruned'] % changeid
959 elif fate == b'diverged':
959 elif fate == b'diverged':
960 return filteredmsgtable[b'diverged'] % changeid
960 return filteredmsgtable[b'diverged'] % changeid
961 elif fate == b'superseded':
961 elif fate == b'superseded':
962 single_successor = short(successors[0][0])
962 single_successor = short(successors[0][0])
963 return filteredmsgtable[b'superseded'] % (changeid, single_successor)
963 return filteredmsgtable[b'superseded'] % (changeid, single_successor)
964 elif fate == b'superseded_split':
964 elif fate == b'superseded_split':
965 succs = []
965 succs = []
966 for node_id in successors[0]:
966 for node_id in successors[0]:
967 succs.append(short(node_id))
967 succs.append(short(node_id))
968
968
969 if len(succs) <= 2:
969 if len(succs) <= 2:
970 fmtsuccs = b', '.join(succs)
970 fmtsuccs = b', '.join(succs)
971 return filteredmsgtable[b'superseded_split'] % (changeid, fmtsuccs)
971 return filteredmsgtable[b'superseded_split'] % (changeid, fmtsuccs)
972 else:
972 else:
973 firstsuccessors = b', '.join(succs[:2])
973 firstsuccessors = b', '.join(succs[:2])
974 remainingnumber = len(succs) - 2
974 remainingnumber = len(succs) - 2
975
975
976 args = (changeid, firstsuccessors, remainingnumber)
976 args = (changeid, firstsuccessors, remainingnumber)
977 return filteredmsgtable[b'superseded_split_several'] % args
977 return filteredmsgtable[b'superseded_split_several'] % args
978 else:
978 else:
979 raise error.ProgrammingError("unhandled fate: %r" % fate)
979 raise error.ProgrammingError("unhandled fate: %r" % fate)
980
980
981
981
982 def divergentsets(repo, ctx):
982 def divergentsets(repo, ctx):
983 """Compute sets of commits divergent with a given one"""
983 """Compute sets of commits divergent with a given one"""
984 cache = {}
984 cache = {}
985 base = {}
985 base = {}
986 for n in allpredecessors(repo.obsstore, [ctx.node()]):
986 for n in allpredecessors(repo.obsstore, [ctx.node()]):
987 if n == ctx.node():
987 if n == ctx.node():
988 # a node can't be a base for divergence with itself
988 # a node can't be a base for divergence with itself
989 continue
989 continue
990 nsuccsets = successorssets(repo, n, cache=cache)  # third positional arg is `closest`
990 nsuccsets = successorssets(repo, n, cache=cache)  # third positional arg is `closest`
991 for nsuccset in nsuccsets:
991 for nsuccset in nsuccsets:
992 if ctx.node() in nsuccset:
992 if ctx.node() in nsuccset:
993 # we are only interested in *other* successor sets
993 # we are only interested in *other* successor sets
994 continue
994 continue
995 if tuple(nsuccset) in base:
995 if tuple(nsuccset) in base:
996 # we already know the latest base for this divergence
996 # we already know the latest base for this divergence
997 continue
997 continue
998 base[tuple(nsuccset)] = n
998 base[tuple(nsuccset)] = n
999 return [
999 return [
1000 {b'divergentnodes': divset, b'commonpredecessor': b}
1000 {b'divergentnodes': divset, b'commonpredecessor': b}
1001 for divset, b in base.items()
1001 for divset, b in base.items()
1002 ]
1002 ]
1003
1003
1004
1004
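# Illustrative sketch (hypothetical helper, not part of the original module):
# formatting the output of `divergentsets`.  `repo` and `ctx` are assumed to
# be a repository object and a change context.
def _example_describe_divergence(repo, ctx):
    lines = []
    for dset in divergentsets(repo, ctx):
        nodes = b', '.join(short(n) for n in dset[b'divergentnodes'])
        base = short(dset[b'commonpredecessor'])
        lines.append(b'divergent with %s (common predecessor %s)' % (nodes, base))
    return lines
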
1005 def whyunstable(repo, ctx):
1005 def whyunstable(repo, ctx):
1006 result = []
1006 result = []
1007 if ctx.orphan():
1007 if ctx.orphan():
1008 for parent in ctx.parents():
1008 for parent in ctx.parents():
1009 kind = None
1009 kind = None
1010 if parent.orphan():
1010 if parent.orphan():
1011 kind = b'orphan'
1011 kind = b'orphan'
1012 elif parent.obsolete():
1012 elif parent.obsolete():
1013 kind = b'obsolete'
1013 kind = b'obsolete'
1014 if kind is not None:
1014 if kind is not None:
1015 result.append(
1015 result.append(
1016 {
1016 {
1017 b'instability': b'orphan',
1017 b'instability': b'orphan',
1018 b'reason': b'%s parent' % kind,
1018 b'reason': b'%s parent' % kind,
1019 b'node': parent.hex(),
1019 b'node': parent.hex(),
1020 }
1020 }
1021 )
1021 )
1022 if ctx.phasedivergent():
1022 if ctx.phasedivergent():
1023 predecessors = allpredecessors(
1023 predecessors = allpredecessors(
1024 repo.obsstore, [ctx.node()], ignoreflags=bumpedfix
1024 repo.obsstore, [ctx.node()], ignoreflags=bumpedfix
1025 )
1025 )
1026 immutable = [
1026 immutable = [
1027 repo[p] for p in predecessors if p in repo and not repo[p].mutable()
1027 repo[p] for p in predecessors if p in repo and not repo[p].mutable()
1028 ]
1028 ]
1029 for predecessor in immutable:
1029 for predecessor in immutable:
1030 result.append(
1030 result.append(
1031 {
1031 {
1032 b'instability': b'phase-divergent',
1032 b'instability': b'phase-divergent',
1033 b'reason': b'immutable predecessor',
1033 b'reason': b'immutable predecessor',
1034 b'node': predecessor.hex(),
1034 b'node': predecessor.hex(),
1035 }
1035 }
1036 )
1036 )
1037 if ctx.contentdivergent():
1037 if ctx.contentdivergent():
1038 dsets = divergentsets(repo, ctx)
1038 dsets = divergentsets(repo, ctx)
1039 for dset in dsets:
1039 for dset in dsets:
1040 divnodes = [repo[n] for n in dset[b'divergentnodes']]
1040 divnodes = [repo[n] for n in dset[b'divergentnodes']]
1041 result.append(
1041 result.append(
1042 {
1042 {
1043 b'instability': b'content-divergent',
1043 b'instability': b'content-divergent',
1044 b'divergentnodes': divnodes,
1044 b'divergentnodes': divnodes,
1045 b'reason': b'predecessor',
1045 b'reason': b'predecessor',
1046 b'node': hex(dset[b'commonpredecessor']),
1046 b'node': hex(dset[b'commonpredecessor']),
1047 }
1047 }
1048 )
1048 )
1049 return result
1049 return result