exchange: improve computation of relevant markers for large repos...
Joerg Sonnenberger
r52922:f28c52a9 default
@@ -1,2675 +1,2675 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are obviously forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allow easy human inspection of a bundle2 header in case of
  trouble.

Any application level options MUST go into a bundle2 part instead.

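For illustration only (this helper is not part of the module), decoding such a
parameter blob might be sketched as::

    from urllib.parse import unquote_to_bytes

    def decode_stream_params(blob):
        # split the space separated list; entries are <name> or <name>=<value>
        params = {}
        for item in blob.split(b' '):
            name, sep, value = item.partition(b'=')
            # both name and value are urlquoted on the wire
            params[unquote_to_bytes(name)] = (
                unquote_to_bytes(value) if sep else None
            )
        return params
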
Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler, that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        A part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

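As an illustrative sketch only (not code from this module), a complete header
blob, with the `header size` field already stripped, could be decoded as::

    import struct

    def parse_part_header(data):
        typesize = data[0]
        parttype = data[1:1 + typesize]
        offset = 1 + typesize
        (partid,) = struct.unpack_from('>I', data, offset)
        offset += 4
        # one byte each for the mandatory and advisory parameter counts
        mancount, advcount = data[offset], data[offset + 1]
        offset += 2
        sizes = struct.unpack_from(
            '>' + 'BB' * (mancount + advcount), data, offset
        )
        offset += 2 * (mancount + advcount)
        params = []
        # mandatory parameters come first, then the advisory ones
        for keysize, valsize in zip(sizes[0::2], sizes[1::2]):
            key = data[offset:offset + keysize]
            offset += keysize
            value = data[offset:offset + valsize]
            offset += valsize
            params.append((key, value))
        return parttype, partid, mancount, params
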
:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

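A payload reader could therefore be sketched as follows (illustration only;
``readexact(fp, n)`` stands in for an exact-read helper such as the one used
elsewhere in this module)::

    import struct

    def iter_payload_chunks(fp):
        while True:
            (chunksize,) = struct.unpack('>i', readexact(fp, 4))
            if chunksize == 0:
                return  # a zero size chunk concludes the payload
            if chunksize < 0:
                raise ValueError('special chunk sizes not handled here')
            yield readexact(fp, chunksize)
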
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the Part type
contains any uppercase char it is considered mandatory. When no handler is
known for a Mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""


import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
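
# For illustration: _makefpartparamsizes(2) == b'>BBBB', i.e. two big-endian
# (key size, value size) byte pairs for two parameters.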


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


class unbundlerecords:
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
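
# Illustrative use of unbundlerecords (sketch, not taken from this module's
# call sites):
#
#     records = unbundlerecords()
#     records.add(b'changegroup', {b'return': 1})
#     records[b'changegroup']   # -> ({b'return': 1},)
#     list(records)             # -> [(b'changegroup', {b'return': 1})]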


class bundleoperation:
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(
        self,
        repo,
        transactiongetter,
        captureoutput=True,
        source=b'',
        remote=None,
    ):
        self.repo = repo
        # the peer object who produced this bundle if available
        self.remote = remote
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries value that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, remote=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(
            repo, unbundler, lambda: tr, source=source, remote=remote
        )
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source, remote=remote)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator:
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
        # Exception, and should not be gracefully cleaned up after.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(
    repo,
    unbundler,
    transactiongetter=None,
    op=None,
    source=b'',
    remote=None,
):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(
            repo,
            transactiongetter,
            source=source,
            remote=remote,
        )
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op
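
# Typical invocation sketch (`ui`, `fp`, `repo` and `tr` come from the
# caller's context; illustration only):
#
#     unbundler = getunbundler(ui, fp)
#     op = processbundle(repo, unbundler, transactiongetter=lambda: tr)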


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    if op.remote is not None and op.remote.path is not None:
        remote_path = op.remote.path
        kwargs = kwargs.copy()
        kwargs['delta_base_reuse_policy'] = remote_path.delta_reuse_policy
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)
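
# Round-trip sketch for the two helpers above (illustrative values):
#
#     encodecaps({b'HG20': [], b'changegroup': [b'01', b'02']})
#       -> b'HG20\nchangegroup=01,02'
#     decodecaps(b'HG20\nchangegroup=01,02')[b'changegroup']
#       -> [b'01', b'02']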


bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20:
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually create and add if you need better
        control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged
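
# Minimal emission sketch for bundle20 (assumes a `ui` object from the
# caller's context; illustration only):
#
#     bundler = bundle20(ui)
#     bundler.newpart(b'output', data=b'hello')
#     raw = b''.join(bundler.getchunks())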


class unpackermixin:
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler
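
# e.g., reading a bundle stored in a file (sketch; `ui` and `bundlepath`
# are assumed to exist in the caller's context):
#
#     with open(bundlepath, 'rb') as fp:
#         unbundler = getunbundler(ui, fp)
#         for part in unbundler.iterparts():
#             part.consume()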
825
825
826
826
827 class unbundle20(unpackermixin):
827 class unbundle20(unpackermixin):
828 """interpret a bundle2 stream
828 """interpret a bundle2 stream
829
829
830 This class is fed with a binary stream and yields parts through its
830 This class is fed with a binary stream and yields parts through its
831 `iterparts` methods."""
831 `iterparts` methods."""
832
832
833 _magicstring = b'HG20'
833 _magicstring = b'HG20'
834
834
835 def __init__(self, ui, fp):
835 def __init__(self, ui, fp):
836 """If header is specified, we do not read it out of the stream."""
836 """If header is specified, we do not read it out of the stream."""
837 self.ui = ui
837 self.ui = ui
838 self._compengine = util.compengines.forbundletype(b'UN')
838 self._compengine = util.compengines.forbundletype(b'UN')
839 self._compressed = None
839 self._compressed = None
840 super(unbundle20, self).__init__(fp)
840 super(unbundle20, self).__init__(fp)
841
841
842 @util.propertycache
842 @util.propertycache
843 def params(self):
843 def params(self):
844 """dictionary of stream level parameters"""
844 """dictionary of stream level parameters"""
845 indebug(self.ui, b'reading bundle2 stream parameters')
845 indebug(self.ui, b'reading bundle2 stream parameters')
846 params = {}
846 params = {}
847 paramssize = self._unpack(_fstreamparamsize)[0]
847 paramssize = self._unpack(_fstreamparamsize)[0]
848 if paramssize < 0:
848 if paramssize < 0:
849 raise error.BundleValueError(
849 raise error.BundleValueError(
850 b'negative bundle param size: %i' % paramssize
850 b'negative bundle param size: %i' % paramssize
851 )
851 )
852 if paramssize:
852 if paramssize:
853 params = self._readexact(paramssize)
853 params = self._readexact(paramssize)
854 params = self._processallparams(params)
854 params = self._processallparams(params)
855 return params
855 return params
856
856
857 def _processallparams(self, paramsblock):
857 def _processallparams(self, paramsblock):
858 """ """
858 """ """
859 params = util.sortdict()
859 params = util.sortdict()
860 for p in paramsblock.split(b' '):
860 for p in paramsblock.split(b' '):
861 p = p.split(b'=', 1)
861 p = p.split(b'=', 1)
862 p = [urlreq.unquote(i) for i in p]
862 p = [urlreq.unquote(i) for i in p]
863 if len(p) < 2:
863 if len(p) < 2:
864 p.append(None)
864 p.append(None)
865 self._processparam(*p)
865 self._processparam(*p)
866 params[p[0]] = p[1]
866 params[p[0]] = p[1]
867 return params
867 return params
868
868
869 def _processparam(self, name, value):
869 def _processparam(self, name, value):
870 """process a parameter, applying its effect if needed
870 """process a parameter, applying its effect if needed
871
871
872 Parameter starting with a lower case letter are advisory and will be
872 Parameter starting with a lower case letter are advisory and will be
873 ignored when unknown. Those starting with an upper case letter are
873 ignored when unknown. Those starting with an upper case letter are
874 mandatory and will this function will raise a KeyError when unknown.
874 mandatory and will this function will raise a KeyError when unknown.
875
875
876 Note: no option are currently supported. Any input will be either
876 Note: no option are currently supported. Any input will be either
877 ignored or failing.
877 ignored or failing.
878 """
878 """
879 if not name:
879 if not name:
880 raise ValueError('empty parameter name')
880 raise ValueError('empty parameter name')
881 if name[0:1] not in pycompat.bytestr(
881 if name[0:1] not in pycompat.bytestr(
882 string.ascii_letters # pytype: disable=wrong-arg-types
882 string.ascii_letters # pytype: disable=wrong-arg-types
883 ):
883 ):
884 raise ValueError('non letter first character: %s' % name)
884 raise ValueError('non letter first character: %s' % name)
885 try:
885 try:
886 handler = b2streamparamsmap[name.lower()]
886 handler = b2streamparamsmap[name.lower()]
887 except KeyError:
887 except KeyError:
888 if name[0:1].islower():
888 if name[0:1].islower():
889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
889 indebug(self.ui, b"ignoring unknown parameter %s" % name)
890 else:
890 else:
891 raise error.BundleUnknownFeatureError(params=(name,))
891 raise error.BundleUnknownFeatureError(params=(name,))
892 else:
892 else:
893 handler(self, name, value)
893 handler(self, name, value)
894
894
895 def _forwardchunks(self):
895 def _forwardchunks(self):
896 """utility to transfer a bundle2 as binary
896 """utility to transfer a bundle2 as binary
897
897
898 This is made necessary by the fact the 'getbundle' command over 'ssh'
898 This is made necessary by the fact the 'getbundle' command over 'ssh'
899 have no way to know when the reply ends, relying on the bundle to be
899 have no way to know when the reply ends, relying on the bundle to be
900 interpreted to know its end. This is terrible and we are sorry, but we
900 interpreted to know its end. This is terrible and we are sorry, but we
901 needed to move forward to get general delta enabled.
901 needed to move forward to get general delta enabled.
902 """
902 """
903 yield self._magicstring
903 yield self._magicstring
904 assert 'params' not in vars(self)
904 assert 'params' not in vars(self)
905 paramssize = self._unpack(_fstreamparamsize)[0]
905 paramssize = self._unpack(_fstreamparamsize)[0]
906 if paramssize < 0:
906 if paramssize < 0:
907 raise error.BundleValueError(
907 raise error.BundleValueError(
908 b'negative bundle param size: %i' % paramssize
908 b'negative bundle param size: %i' % paramssize
909 )
909 )
910 if paramssize:
910 if paramssize:
911 params = self._readexact(paramssize)
911 params = self._readexact(paramssize)
912 self._processallparams(params)
912 self._processallparams(params)
913 # The payload itself is decompressed below, so drop
913 # The payload itself is decompressed below, so drop
914 # the compression parameter passed down to compensate.
914 # the compression parameter passed down to compensate.
915 outparams = []
915 outparams = []
916 for p in params.split(b' '):
916 for p in params.split(b' '):
917 k, v = p.split(b'=', 1)
917 k, v = p.split(b'=', 1)
918 if k.lower() != b'compression':
918 if k.lower() != b'compression':
919 outparams.append(p)
919 outparams.append(p)
920 outparams = b' '.join(outparams)
920 outparams = b' '.join(outparams)
921 yield _pack(_fstreamparamsize, len(outparams))
921 yield _pack(_fstreamparamsize, len(outparams))
922 yield outparams
922 yield outparams
923 else:
923 else:
924 yield _pack(_fstreamparamsize, paramssize)
924 yield _pack(_fstreamparamsize, paramssize)
925 # From there, the payload might need to be decompressed
925 # From there, the payload might need to be decompressed
926 self._fp = self._compengine.decompressorreader(self._fp)
926 self._fp = self._compengine.decompressorreader(self._fp)
927 emptycount = 0
927 emptycount = 0
928 while emptycount < 2:
928 while emptycount < 2:
929 # so we can brainlessly loop
929 # so we can brainlessly loop
930 assert _fpartheadersize == _fpayloadsize
930 assert _fpartheadersize == _fpayloadsize
931 size = self._unpack(_fpartheadersize)[0]
931 size = self._unpack(_fpartheadersize)[0]
932 yield _pack(_fpartheadersize, size)
932 yield _pack(_fpartheadersize, size)
933 if size:
933 if size:
934 emptycount = 0
934 emptycount = 0
935 else:
935 else:
936 emptycount += 1
936 emptycount += 1
937 continue
937 continue
938 if size == flaginterrupt:
938 if size == flaginterrupt:
939 continue
939 continue
940 elif size < 0:
940 elif size < 0:
941 raise error.BundleValueError(b'negative chunk size: %i' % size)
941 raise error.BundleValueError(b'negative chunk size: %i' % size)
942 yield self._readexact(size)
942 yield self._readexact(size)
943
943
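# Framing sketch (illustrative): the loop above forwards raw frames. A
# bundle2 body is a sequence of int32-size-prefixed blobs; a size of -1
# (flaginterrupt) introduces an out-of-band part, and two consecutive
# zero sizes (end of the last payload, then an empty header) terminate
# the stream:
#
#   |int32 size|blob|int32 size|blob|...|0|0|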
944 def iterparts(self, seekable=False):
944 def iterparts(self, seekable=False):
945 """yield all parts contained in the stream"""
945 """yield all parts contained in the stream"""
946 cls = seekableunbundlepart if seekable else unbundlepart
946 cls = seekableunbundlepart if seekable else unbundlepart
947 # make sure params have been loaded
947 # make sure params have been loaded
948 self.params
948 self.params
949 # From there, the payload needs to be decompressed
949 # From there, the payload needs to be decompressed
950 self._fp = self._compengine.decompressorreader(self._fp)
950 self._fp = self._compengine.decompressorreader(self._fp)
951 indebug(self.ui, b'start extraction of bundle2 parts')
951 indebug(self.ui, b'start extraction of bundle2 parts')
952 headerblock = self._readpartheader()
952 headerblock = self._readpartheader()
953 while headerblock is not None:
953 while headerblock is not None:
954 part = cls(self.ui, headerblock, self._fp)
954 part = cls(self.ui, headerblock, self._fp)
955 yield part
955 yield part
956 # Ensure part is fully consumed so we can start reading the next
956 # Ensure part is fully consumed so we can start reading the next
957 # part.
957 # part.
958 part.consume()
958 part.consume()
959
959
960 headerblock = self._readpartheader()
960 headerblock = self._readpartheader()
961 indebug(self.ui, b'end of bundle2 stream')
961 indebug(self.ui, b'end of bundle2 stream')
962
962
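# Usage sketch (hedged: assumes a readable bundle2 stream `fh` opened
# elsewhere and a `ui` instance):
#
#     unbundler = unbundle20(ui, fh)
#     for part in unbundler.iterparts():
#         ui.debug(b'got part: %s\n' % part.type)
#         data = part.read()  # iterparts() consumes any remainder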
963 def _readpartheader(self):
963 def _readpartheader(self):
964 """reads a part header size and return the bytes blob
964 """reads a part header size and return the bytes blob
965
965
966 returns None if empty"""
966 returns None if empty"""
967 headersize = self._unpack(_fpartheadersize)[0]
967 headersize = self._unpack(_fpartheadersize)[0]
968 if headersize < 0:
968 if headersize < 0:
969 raise error.BundleValueError(
969 raise error.BundleValueError(
970 b'negative part header size: %i' % headersize
970 b'negative part header size: %i' % headersize
971 )
971 )
972 indebug(self.ui, b'part header size: %i' % headersize)
972 indebug(self.ui, b'part header size: %i' % headersize)
973 if headersize:
973 if headersize:
974 return self._readexact(headersize)
974 return self._readexact(headersize)
975 return None
975 return None
976
976
977 def compressed(self):
977 def compressed(self):
978 self.params # load params
978 self.params # load params
979 return self._compressed
979 return self._compressed
980
980
981 def close(self):
981 def close(self):
982 """close underlying file"""
982 """close underlying file"""
983 if hasattr(self._fp, 'close'):
983 if hasattr(self._fp, 'close'):
984 return self._fp.close()
984 return self._fp.close()
985
985
986
986
987 formatmap = {b'20': unbundle20}
987 formatmap = {b'20': unbundle20}
988
988
989 b2streamparamsmap = {}
989 b2streamparamsmap = {}
990
990
991
991
992 def b2streamparamhandler(name):
992 def b2streamparamhandler(name):
993 """register a handler for a stream level parameter"""
993 """register a handler for a stream level parameter"""
994
994
995 def decorator(func):
995 def decorator(func):
996 assert name not in formatmap
996 assert name not in formatmap
997 b2streamparamsmap[name] = func
997 b2streamparamsmap[name] = func
998 return func
998 return func
999
999
1000 return decorator
1000 return decorator
1001
1001
1002
1002
1003 @b2streamparamhandler(b'compression')
1003 @b2streamparamhandler(b'compression')
1004 def processcompression(unbundler, param, value):
1004 def processcompression(unbundler, param, value):
1005 """read compression parameter and install payload decompression"""
1005 """read compression parameter and install payload decompression"""
1006 if value not in util.compengines.supportedbundletypes:
1006 if value not in util.compengines.supportedbundletypes:
1007 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1007 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
1008 unbundler._compengine = util.compengines.forbundletype(value)
1008 unbundler._compengine = util.compengines.forbundletype(value)
1009 if value is not None:
1009 if value is not None:
1010 unbundler._compressed = True
1010 unbundler._compressed = True
1011
1011
1012
1012
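# A hypothetical extra handler could be registered the same way; the
# b'checksum' parameter below is purely illustrative and not part of the
# bundle2 specification:
#
#     @b2streamparamhandler(b'checksum')
#     def processchecksum(unbundler, param, value):
#         """record an advisory stream checksum for later verification"""
#         unbundler._streamchecksum = value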
1013 class bundlepart:
1013 class bundlepart:
1014 """A bundle2 part contains application level payload
1014 """A bundle2 part contains application level payload
1015
1015
1016 The part `type` is used to route the part to the application level
1016 The part `type` is used to route the part to the application level
1017 handler.
1017 handler.
1018
1018
1019 The part payload is contained in ``part.data``. It could be raw bytes or a
1019 The part payload is contained in ``part.data``. It could be raw bytes or a
1020 generator of byte chunks.
1020 generator of byte chunks.
1021
1021
1022 You can add parameters to the part using the ``addparam`` method.
1022 You can add parameters to the part using the ``addparam`` method.
1023 Parameters can be either mandatory (default) or advisory. The remote
1023 Parameters can be either mandatory (default) or advisory. The remote
1024 side should be able to safely ignore the advisory ones.
1024 side should be able to safely ignore the advisory ones.
1025
1025
1026 Neither data nor parameters can be modified after generation has begun.
1026 Neither data nor parameters can be modified after generation has begun.
1027 """
1027 """
1028
1028
1029 def __init__(
1029 def __init__(
1030 self,
1030 self,
1031 parttype,
1031 parttype,
1032 mandatoryparams=(),
1032 mandatoryparams=(),
1033 advisoryparams=(),
1033 advisoryparams=(),
1034 data=b'',
1034 data=b'',
1035 mandatory=True,
1035 mandatory=True,
1036 ):
1036 ):
1037 validateparttype(parttype)
1037 validateparttype(parttype)
1038 self.id = None
1038 self.id = None
1039 self.type = parttype
1039 self.type = parttype
1040 self._data = data
1040 self._data = data
1041 self._mandatoryparams = list(mandatoryparams)
1041 self._mandatoryparams = list(mandatoryparams)
1042 self._advisoryparams = list(advisoryparams)
1042 self._advisoryparams = list(advisoryparams)
1043 # checking for duplicated entries
1043 # checking for duplicated entries
1044 self._seenparams = set()
1044 self._seenparams = set()
1045 for pname, __ in self._mandatoryparams + self._advisoryparams:
1045 for pname, __ in self._mandatoryparams + self._advisoryparams:
1046 if pname in self._seenparams:
1046 if pname in self._seenparams:
1047 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1047 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1048 self._seenparams.add(pname)
1048 self._seenparams.add(pname)
1049 # status of the part's generation:
1049 # status of the part's generation:
1050 # - None: not started,
1050 # - None: not started,
1051 # - False: currently being generated,
1051 # - False: currently being generated,
1052 # - True: generation done.
1052 # - True: generation done.
1053 self._generated = None
1053 self._generated = None
1054 self.mandatory = mandatory
1054 self.mandatory = mandatory
1055
1055
1056 def __repr__(self):
1056 def __repr__(self):
1057 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1057 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1058 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1058 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1059 cls,
1059 cls,
1060 id(self),
1060 id(self),
1061 self.id,
1061 self.id,
1062 self.type,
1062 self.type,
1063 self.mandatory,
1063 self.mandatory,
1064 )
1064 )
1065
1065
1066 def copy(self):
1066 def copy(self):
1067 """return a copy of the part
1067 """return a copy of the part
1068
1068
1069 The new part has the very same content but no partid assigned yet.
1069 The new part has the very same content but no partid assigned yet.
1070 Parts with generated data cannot be copied."""
1070 Parts with generated data cannot be copied."""
1071 assert not hasattr(self.data, 'next')
1071 assert not hasattr(self.data, 'next')
1072 return self.__class__(
1072 return self.__class__(
1073 self.type,
1073 self.type,
1074 self._mandatoryparams,
1074 self._mandatoryparams,
1075 self._advisoryparams,
1075 self._advisoryparams,
1076 self._data,
1076 self._data,
1077 self.mandatory,
1077 self.mandatory,
1078 )
1078 )
1079
1079
1080 # methods used to define the part content
1080 # methods used to define the part content
1081 @property
1081 @property
1082 def data(self):
1082 def data(self):
1083 return self._data
1083 return self._data
1084
1084
1085 @data.setter
1085 @data.setter
1086 def data(self, data):
1086 def data(self, data):
1087 if self._generated is not None:
1087 if self._generated is not None:
1088 raise error.ReadOnlyPartError(b'part is being generated')
1088 raise error.ReadOnlyPartError(b'part is being generated')
1089 self._data = data
1089 self._data = data
1090
1090
1091 @property
1091 @property
1092 def mandatoryparams(self):
1092 def mandatoryparams(self):
1093 # make it an immutable tuple to force people through ``addparam``
1093 # make it an immutable tuple to force people through ``addparam``
1094 return tuple(self._mandatoryparams)
1094 return tuple(self._mandatoryparams)
1095
1095
1096 @property
1096 @property
1097 def advisoryparams(self):
1097 def advisoryparams(self):
1098 # make it an immutable tuple to force people through ``addparam``
1098 # make it an immutable tuple to force people through ``addparam``
1099 return tuple(self._advisoryparams)
1099 return tuple(self._advisoryparams)
1100
1100
1101 def addparam(self, name, value=b'', mandatory=True):
1101 def addparam(self, name, value=b'', mandatory=True):
1102 """add a parameter to the part
1102 """add a parameter to the part
1103
1103
1104 If 'mandatory' is set to True, the remote handler must claim support
1104 If 'mandatory' is set to True, the remote handler must claim support
1105 for this parameter or the unbundling will be aborted.
1105 for this parameter or the unbundling will be aborted.
1106
1106
1107 The 'name' and 'value' cannot exceed 255 bytes each.
1107 The 'name' and 'value' cannot exceed 255 bytes each.
1108 """
1108 """
1109 if self._generated is not None:
1109 if self._generated is not None:
1110 raise error.ReadOnlyPartError(b'part is being generated')
1110 raise error.ReadOnlyPartError(b'part is being generated')
1111 if name in self._seenparams:
1111 if name in self._seenparams:
1112 raise ValueError(b'duplicated params: %s' % name)
1112 raise ValueError(b'duplicated params: %s' % name)
1113 self._seenparams.add(name)
1113 self._seenparams.add(name)
1114 params = self._advisoryparams
1114 params = self._advisoryparams
1115 if mandatory:
1115 if mandatory:
1116 params = self._mandatoryparams
1116 params = self._mandatoryparams
1117 params.append((name, value))
1117 params.append((name, value))
1118
1118
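# Usage sketch: assembling a part by hand (b'test:song' is an
# illustrative part type; ids are normally assigned by
# bundle20.addpart()):
#
#     part = bundlepart(b'test:song', data=b'la la la')
#     part.addparam(b'verses', b'3', mandatory=False)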
1119 # methods used to generate the bundle2 stream
1119 # methods used to generate the bundle2 stream
1120 def getchunks(self, ui):
1120 def getchunks(self, ui):
1121 if self._generated is not None:
1121 if self._generated is not None:
1122 raise error.ProgrammingError(b'part can only be consumed once')
1122 raise error.ProgrammingError(b'part can only be consumed once')
1123 self._generated = False
1123 self._generated = False
1124
1124
1125 if ui.debugflag:
1125 if ui.debugflag:
1126 msg = [b'bundle2-output-part: "%s"' % self.type]
1126 msg = [b'bundle2-output-part: "%s"' % self.type]
1127 if not self.mandatory:
1127 if not self.mandatory:
1128 msg.append(b' (advisory)')
1128 msg.append(b' (advisory)')
1129 nbmp = len(self.mandatoryparams)
1129 nbmp = len(self.mandatoryparams)
1130 nbap = len(self.advisoryparams)
1130 nbap = len(self.advisoryparams)
1131 if nbmp or nbap:
1131 if nbmp or nbap:
1132 msg.append(b' (params:')
1132 msg.append(b' (params:')
1133 if nbmp:
1133 if nbmp:
1134 msg.append(b' %i mandatory' % nbmp)
1134 msg.append(b' %i mandatory' % nbmp)
1135 if nbap:
1135 if nbap:
1136 msg.append(b' %i advisory' % nbap)
1136 msg.append(b' %i advisory' % nbap)
1137 msg.append(b')')
1137 msg.append(b')')
1138 if not self.data:
1138 if not self.data:
1139 msg.append(b' empty payload')
1139 msg.append(b' empty payload')
1140 elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1140 elif hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1141 msg.append(b' streamed payload')
1141 msg.append(b' streamed payload')
1142 else:
1142 else:
1143 msg.append(b' %i bytes payload' % len(self.data))
1143 msg.append(b' %i bytes payload' % len(self.data))
1144 msg.append(b'\n')
1144 msg.append(b'\n')
1145 ui.debug(b''.join(msg))
1145 ui.debug(b''.join(msg))
1146
1146
1147 #### header
1147 #### header
1148 if self.mandatory:
1148 if self.mandatory:
1149 parttype = self.type.upper()
1149 parttype = self.type.upper()
1150 else:
1150 else:
1151 parttype = self.type.lower()
1151 parttype = self.type.lower()
1152 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1152 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1153 ## parttype
1153 ## parttype
1154 header = [
1154 header = [
1155 _pack(_fparttypesize, len(parttype)),
1155 _pack(_fparttypesize, len(parttype)),
1156 parttype,
1156 parttype,
1157 _pack(_fpartid, self.id),
1157 _pack(_fpartid, self.id),
1158 ]
1158 ]
1159 ## parameters
1159 ## parameters
1160 # count
1160 # count
1161 manpar = self.mandatoryparams
1161 manpar = self.mandatoryparams
1162 advpar = self.advisoryparams
1162 advpar = self.advisoryparams
1163 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1163 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1164 # size
1164 # size
1165 parsizes = []
1165 parsizes = []
1166 for key, value in manpar:
1166 for key, value in manpar:
1167 parsizes.append(len(key))
1167 parsizes.append(len(key))
1168 parsizes.append(len(value))
1168 parsizes.append(len(value))
1169 for key, value in advpar:
1169 for key, value in advpar:
1170 parsizes.append(len(key))
1170 parsizes.append(len(key))
1171 parsizes.append(len(value))
1171 parsizes.append(len(value))
1172 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1172 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1173 header.append(paramsizes)
1173 header.append(paramsizes)
1174 # key, value
1174 # key, value
1175 for key, value in manpar:
1175 for key, value in manpar:
1176 header.append(key)
1176 header.append(key)
1177 header.append(value)
1177 header.append(value)
1178 for key, value in advpar:
1178 for key, value in advpar:
1179 header.append(key)
1179 header.append(key)
1180 header.append(value)
1180 header.append(value)
1181 ## finalize header
1181 ## finalize header
1182 try:
1182 try:
1183 headerchunk = b''.join(header)
1183 headerchunk = b''.join(header)
1184 except TypeError:
1184 except TypeError:
1185 raise TypeError(
1185 raise TypeError(
1186 'Found a non-bytes value trying to '
1186 'Found a non-bytes value trying to '
1187 'build bundle part header: %r' % header
1187 'build bundle part header: %r' % header
1188 )
1188 )
1189 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1189 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1190 yield _pack(_fpartheadersize, len(headerchunk))
1190 yield _pack(_fpartheadersize, len(headerchunk))
1191 yield headerchunk
1191 yield headerchunk
1192 ## payload
1192 ## payload
1193 try:
1193 try:
1194 for chunk in self._payloadchunks():
1194 for chunk in self._payloadchunks():
1195 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1195 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1196 yield _pack(_fpayloadsize, len(chunk))
1196 yield _pack(_fpayloadsize, len(chunk))
1197 yield chunk
1197 yield chunk
1198 except GeneratorExit:
1198 except GeneratorExit:
1199 # GeneratorExit means that nobody is listening for our
1199 # GeneratorExit means that nobody is listening for our
1200 # results anyway, so just bail quickly rather than trying
1200 # results anyway, so just bail quickly rather than trying
1201 # to produce an error part.
1201 # to produce an error part.
1202 ui.debug(b'bundle2-generatorexit\n')
1202 ui.debug(b'bundle2-generatorexit\n')
1203 raise
1203 raise
1204 except BaseException as exc:
1204 except BaseException as exc:
1205 bexc = stringutil.forcebytestr(exc)
1205 bexc = stringutil.forcebytestr(exc)
1206 # backup exception data for later
1206 # backup exception data for later
1207 ui.debug(
1207 ui.debug(
1208 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1208 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1209 )
1209 )
1210 tb = sys.exc_info()[2]
1210 tb = sys.exc_info()[2]
1211 msg = b'unexpected error: %s' % bexc
1211 msg = b'unexpected error: %s' % bexc
1212 interpart = bundlepart(
1212 interpart = bundlepart(
1213 b'error:abort', [(b'message', msg)], mandatory=False
1213 b'error:abort', [(b'message', msg)], mandatory=False
1214 )
1214 )
1215 interpart.id = 0
1215 interpart.id = 0
1216 yield _pack(_fpayloadsize, -1)
1216 yield _pack(_fpayloadsize, -1)
1217 for chunk in interpart.getchunks(ui=ui):
1217 for chunk in interpart.getchunks(ui=ui):
1218 yield chunk
1218 yield chunk
1219 outdebug(ui, b'closing payload chunk')
1219 outdebug(ui, b'closing payload chunk')
1220 # abort current part payload
1220 # abort current part payload
1221 yield _pack(_fpayloadsize, 0)
1221 yield _pack(_fpayloadsize, 0)
1222 pycompat.raisewithtb(exc, tb)
1222 pycompat.raisewithtb(exc, tb)
1223 # end of payload
1223 # end of payload
1224 outdebug(ui, b'closing payload chunk')
1224 outdebug(ui, b'closing payload chunk')
1225 yield _pack(_fpayloadsize, 0)
1225 yield _pack(_fpayloadsize, 0)
1226 self._generated = True
1226 self._generated = True
1227
1227
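# Resulting wire layout (sketch): an int32 header size and the header
# blob (type, id, parameters), then size-prefixed payload chunks, with a
# -1 size plus an inlined error:abort part on generation failure, and a
# zero size terminating the payload:
#
#   |len(header)|header|len(p1)|p1|...|len(pN)|pN|0|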
1228 def _payloadchunks(self):
1228 def _payloadchunks(self):
1229 """yield chunks of a the part payload
1229 """yield chunks of a the part payload
1230
1230
1231 Exists to handle the different methods to provide data to a part."""
1231 Exists to handle the different methods to provide data to a part."""
1232 # we only support fixed size data now.
1232 # we only support fixed size data now.
1233 # This will be improved in the future.
1233 # This will be improved in the future.
1234 if hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1234 if hasattr(self.data, 'next') or hasattr(self.data, '__next__'):
1235 buff = util.chunkbuffer(self.data)
1235 buff = util.chunkbuffer(self.data)
1236 chunk = buff.read(preferedchunksize)
1236 chunk = buff.read(preferedchunksize)
1237 while chunk:
1237 while chunk:
1238 yield chunk
1238 yield chunk
1239 chunk = buff.read(preferedchunksize)
1239 chunk = buff.read(preferedchunksize)
1240 elif len(self.data):
1240 elif len(self.data):
1241 yield self.data
1241 yield self.data
1242
1242
1243
1243
1244 flaginterrupt = -1
1244 flaginterrupt = -1
1245
1245
1246
1246
1247 class interrupthandler(unpackermixin):
1247 class interrupthandler(unpackermixin):
1248 """read one part and process it with restricted capability
1248 """read one part and process it with restricted capability
1249
1249
1250 This allows transmitting exceptions raised on the producer side during
1250 This allows transmitting exceptions raised on the producer side during
1251 part iteration while the consumer is reading a part.
1251 part iteration while the consumer is reading a part.
1252
1252
1253 Parts processed in this manner only have access to a ui object."""
1253 Parts processed in this manner only have access to a ui object."""
1254
1254
1255 def __init__(self, ui, fp):
1255 def __init__(self, ui, fp):
1256 super(interrupthandler, self).__init__(fp)
1256 super(interrupthandler, self).__init__(fp)
1257 self.ui = ui
1257 self.ui = ui
1258
1258
1259 def _readpartheader(self):
1259 def _readpartheader(self):
1260 """reads a part header size and return the bytes blob
1260 """reads a part header size and return the bytes blob
1261
1261
1262 returns None if empty"""
1262 returns None if empty"""
1263 headersize = self._unpack(_fpartheadersize)[0]
1263 headersize = self._unpack(_fpartheadersize)[0]
1264 if headersize < 0:
1264 if headersize < 0:
1265 raise error.BundleValueError(
1265 raise error.BundleValueError(
1266 b'negative part header size: %i' % headersize
1266 b'negative part header size: %i' % headersize
1267 )
1267 )
1268 indebug(self.ui, b'part header size: %i\n' % headersize)
1268 indebug(self.ui, b'part header size: %i\n' % headersize)
1269 if headersize:
1269 if headersize:
1270 return self._readexact(headersize)
1270 return self._readexact(headersize)
1271 return None
1271 return None
1272
1272
1273 def __call__(self):
1273 def __call__(self):
1274
1274
1275 self.ui.debug(
1275 self.ui.debug(
1276 b'bundle2-input-stream-interrupt: opening out of band context\n'
1276 b'bundle2-input-stream-interrupt: opening out of band context\n'
1277 )
1277 )
1278 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1278 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1279 headerblock = self._readpartheader()
1279 headerblock = self._readpartheader()
1280 if headerblock is None:
1280 if headerblock is None:
1281 indebug(self.ui, b'no part found during interruption.')
1281 indebug(self.ui, b'no part found during interruption.')
1282 return
1282 return
1283 part = unbundlepart(self.ui, headerblock, self._fp)
1283 part = unbundlepart(self.ui, headerblock, self._fp)
1284 op = interruptoperation(self.ui)
1284 op = interruptoperation(self.ui)
1285 hardabort = False
1285 hardabort = False
1286 try:
1286 try:
1287 _processpart(op, part)
1287 _processpart(op, part)
1288 except (SystemExit, KeyboardInterrupt):
1288 except (SystemExit, KeyboardInterrupt):
1289 hardabort = True
1289 hardabort = True
1290 raise
1290 raise
1291 finally:
1291 finally:
1292 if not hardabort:
1292 if not hardabort:
1293 part.consume()
1293 part.consume()
1294 self.ui.debug(
1294 self.ui.debug(
1295 b'bundle2-input-stream-interrupt: closing out of band context\n'
1295 b'bundle2-input-stream-interrupt: closing out of band context\n'
1296 )
1296 )
1297
1297
1298
1298
1299 class interruptoperation:
1299 class interruptoperation:
1300 """A limited operation to be use by part handler during interruption
1300 """A limited operation to be use by part handler during interruption
1301
1301
1302 It only has access to a ui object.
1302 It only has access to a ui object.
1303 """
1303 """
1304
1304
1305 def __init__(self, ui):
1305 def __init__(self, ui):
1306 self.ui = ui
1306 self.ui = ui
1307 self.reply = None
1307 self.reply = None
1308 self.captureoutput = False
1308 self.captureoutput = False
1309
1309
1310 @property
1310 @property
1311 def repo(self):
1311 def repo(self):
1312 raise error.ProgrammingError(b'no repo access from stream interruption')
1312 raise error.ProgrammingError(b'no repo access from stream interruption')
1313
1313
1314 def gettransaction(self):
1314 def gettransaction(self):
1315 raise TransactionUnavailable(b'no repo access from stream interruption')
1315 raise TransactionUnavailable(b'no repo access from stream interruption')
1316
1316
1317
1317
1318 def decodepayloadchunks(ui, fh):
1318 def decodepayloadchunks(ui, fh):
1319 """Reads bundle2 part payload data into chunks.
1319 """Reads bundle2 part payload data into chunks.
1320
1320
1321 Part payload data consists of framed chunks. This function takes
1321 Part payload data consists of framed chunks. This function takes
1322 a file handle and emits those chunks.
1322 a file handle and emits those chunks.
1323 """
1323 """
1324 dolog = ui.configbool(b'devel', b'bundle2.debug')
1324 dolog = ui.configbool(b'devel', b'bundle2.debug')
1325 debug = ui.debug
1325 debug = ui.debug
1326
1326
1327 headerstruct = struct.Struct(_fpayloadsize)
1327 headerstruct = struct.Struct(_fpayloadsize)
1328 headersize = headerstruct.size
1328 headersize = headerstruct.size
1329 unpack = headerstruct.unpack
1329 unpack = headerstruct.unpack
1330
1330
1331 readexactly = changegroup.readexactly
1331 readexactly = changegroup.readexactly
1332 read = fh.read
1332 read = fh.read
1333
1333
1334 chunksize = unpack(readexactly(fh, headersize))[0]
1334 chunksize = unpack(readexactly(fh, headersize))[0]
1335 indebug(ui, b'payload chunk size: %i' % chunksize)
1335 indebug(ui, b'payload chunk size: %i' % chunksize)
1336
1336
1337 # changegroup.readexactly() is inlined below for performance.
1337 # changegroup.readexactly() is inlined below for performance.
1338 while chunksize:
1338 while chunksize:
1339 if chunksize >= 0:
1339 if chunksize >= 0:
1340 s = read(chunksize)
1340 s = read(chunksize)
1341 if len(s) < chunksize:
1341 if len(s) < chunksize:
1342 raise error.Abort(
1342 raise error.Abort(
1343 _(
1343 _(
1344 b'stream ended unexpectedly '
1344 b'stream ended unexpectedly '
1345 b'(got %d bytes, expected %d)'
1345 b'(got %d bytes, expected %d)'
1346 )
1346 )
1347 % (len(s), chunksize)
1347 % (len(s), chunksize)
1348 )
1348 )
1349
1349
1350 yield s
1350 yield s
1351 elif chunksize == flaginterrupt:
1351 elif chunksize == flaginterrupt:
1352 # Interrupt "signal" detected. The regular stream is interrupted
1352 # Interrupt "signal" detected. The regular stream is interrupted
1353 # and a bundle2 part follows. Consume it.
1353 # and a bundle2 part follows. Consume it.
1354 interrupthandler(ui, fh)()
1354 interrupthandler(ui, fh)()
1355 else:
1355 else:
1356 raise error.BundleValueError(
1356 raise error.BundleValueError(
1357 b'negative payload chunk size: %s' % chunksize
1357 b'negative payload chunk size: %s' % chunksize
1358 )
1358 )
1359
1359
1360 s = read(headersize)
1360 s = read(headersize)
1361 if len(s) < headersize:
1361 if len(s) < headersize:
1362 raise error.Abort(
1362 raise error.Abort(
1363 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1363 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1364 % (len(s), headersize)
1364 % (len(s), headersize)
1365 )
1365 )
1366
1366
1367 chunksize = unpack(s)[0]
1367 chunksize = unpack(s)[0]
1368
1368
1369 # indebug() inlined for performance.
1369 # indebug() inlined for performance.
1370 if dolog:
1370 if dolog:
1371 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1371 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1372
1372
1373
1373
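# Usage sketch (hedged: `fh` must be positioned just past a part
# header, and `total` is a placeholder accumulator):
#
#     total = 0
#     for chunk in decodepayloadchunks(ui, fh):
#         total += len(chunk)
#
# The generator returns on a zero-size frame and transparently dispatches
# interrupt parts (size == flaginterrupt) to interrupthandler.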
1374 class unbundlepart(unpackermixin):
1374 class unbundlepart(unpackermixin):
1375 """a bundle part read from a bundle"""
1375 """a bundle part read from a bundle"""
1376
1376
1377 def __init__(self, ui, header, fp):
1377 def __init__(self, ui, header, fp):
1378 super(unbundlepart, self).__init__(fp)
1378 super(unbundlepart, self).__init__(fp)
1379 self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell')
1379 self._seekable = hasattr(fp, 'seek') and hasattr(fp, 'tell')
1380 self.ui = ui
1380 self.ui = ui
1381 # unbundle state attr
1381 # unbundle state attr
1382 self._headerdata = header
1382 self._headerdata = header
1383 self._headeroffset = 0
1383 self._headeroffset = 0
1384 self._initialized = False
1384 self._initialized = False
1385 self.consumed = False
1385 self.consumed = False
1386 # part data
1386 # part data
1387 self.id = None
1387 self.id = None
1388 self.type = None
1388 self.type = None
1389 self.mandatoryparams = None
1389 self.mandatoryparams = None
1390 self.advisoryparams = None
1390 self.advisoryparams = None
1391 self.params = None
1391 self.params = None
1392 self.mandatorykeys = ()
1392 self.mandatorykeys = ()
1393 self._readheader()
1393 self._readheader()
1394 self._mandatory = None
1394 self._mandatory = None
1395 self._pos = 0
1395 self._pos = 0
1396
1396
1397 def _fromheader(self, size):
1397 def _fromheader(self, size):
1398 """return the next <size> byte from the header"""
1398 """return the next <size> byte from the header"""
1399 offset = self._headeroffset
1399 offset = self._headeroffset
1400 data = self._headerdata[offset : (offset + size)]
1400 data = self._headerdata[offset : (offset + size)]
1401 self._headeroffset = offset + size
1401 self._headeroffset = offset + size
1402 return data
1402 return data
1403
1403
1404 def _unpackheader(self, format):
1404 def _unpackheader(self, format):
1405 """read given format from header
1405 """read given format from header
1406
1406
1407 This automatically compute the size of the format to read."""
1407 This automatically compute the size of the format to read."""
1408 data = self._fromheader(struct.calcsize(format))
1408 data = self._fromheader(struct.calcsize(format))
1409 return _unpack(format, data)
1409 return _unpack(format, data)
1410
1410
1411 def _initparams(self, mandatoryparams, advisoryparams):
1411 def _initparams(self, mandatoryparams, advisoryparams):
1412 """internal function to setup all logic related parameters"""
1412 """internal function to setup all logic related parameters"""
1413 # make it read only to prevent people touching it by mistake.
1413 # make it read only to prevent people touching it by mistake.
1414 self.mandatoryparams = tuple(mandatoryparams)
1414 self.mandatoryparams = tuple(mandatoryparams)
1415 self.advisoryparams = tuple(advisoryparams)
1415 self.advisoryparams = tuple(advisoryparams)
1416 # user friendly UI
1416 # user friendly UI
1417 self.params = util.sortdict(self.mandatoryparams)
1417 self.params = util.sortdict(self.mandatoryparams)
1418 self.params.update(self.advisoryparams)
1418 self.params.update(self.advisoryparams)
1419 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1419 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1420
1420
1421 def _readheader(self):
1421 def _readheader(self):
1422 """read the header and setup the object"""
1422 """read the header and setup the object"""
1423 typesize = self._unpackheader(_fparttypesize)[0]
1423 typesize = self._unpackheader(_fparttypesize)[0]
1424 self.type = self._fromheader(typesize)
1424 self.type = self._fromheader(typesize)
1425 indebug(self.ui, b'part type: "%s"' % self.type)
1425 indebug(self.ui, b'part type: "%s"' % self.type)
1426 self.id = self._unpackheader(_fpartid)[0]
1426 self.id = self._unpackheader(_fpartid)[0]
1427 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1427 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1428 # extract mandatory bit from type
1428 # extract mandatory bit from type
1429 self.mandatory = self.type != self.type.lower()
1429 self.mandatory = self.type != self.type.lower()
1430 self.type = self.type.lower()
1430 self.type = self.type.lower()
1431 ## reading parameters
1431 ## reading parameters
1432 # param count
1432 # param count
1433 mancount, advcount = self._unpackheader(_fpartparamcount)
1433 mancount, advcount = self._unpackheader(_fpartparamcount)
1434 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1434 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1435 # param size
1435 # param size
1436 fparamsizes = _makefpartparamsizes(mancount + advcount)
1436 fparamsizes = _makefpartparamsizes(mancount + advcount)
1437 paramsizes = self._unpackheader(fparamsizes)
1437 paramsizes = self._unpackheader(fparamsizes)
1438 # make it a list of pairs again
1438 # make it a list of pairs again
1439 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1439 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1440 # split mandatory from advisory
1440 # split mandatory from advisory
1441 mansizes = paramsizes[:mancount]
1441 mansizes = paramsizes[:mancount]
1442 advsizes = paramsizes[mancount:]
1442 advsizes = paramsizes[mancount:]
1443 # retrieve param value
1443 # retrieve param value
1444 manparams = []
1444 manparams = []
1445 for key, value in mansizes:
1445 for key, value in mansizes:
1446 manparams.append((self._fromheader(key), self._fromheader(value)))
1446 manparams.append((self._fromheader(key), self._fromheader(value)))
1447 advparams = []
1447 advparams = []
1448 for key, value in advsizes:
1448 for key, value in advsizes:
1449 advparams.append((self._fromheader(key), self._fromheader(value)))
1449 advparams.append((self._fromheader(key), self._fromheader(value)))
1450 self._initparams(manparams, advparams)
1450 self._initparams(manparams, advparams)
1451 ## part payload
1451 ## part payload
1452 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1452 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1453 # the header has been fully read, record it
1453 # the header has been fully read, record it
1454 self._initialized = True
1454 self._initialized = True
1455
1455
1456 def _payloadchunks(self):
1456 def _payloadchunks(self):
1457 """Generator of decoded chunks in the payload."""
1457 """Generator of decoded chunks in the payload."""
1458 return decodepayloadchunks(self.ui, self._fp)
1458 return decodepayloadchunks(self.ui, self._fp)
1459
1459
1460 def consume(self):
1460 def consume(self):
1461 """Read the part payload until completion.
1461 """Read the part payload until completion.
1462
1462
1463 By consuming the part data, the underlying stream read offset will
1463 By consuming the part data, the underlying stream read offset will
1464 be advanced to the next part (or end of stream).
1464 be advanced to the next part (or end of stream).
1465 """
1465 """
1466 if self.consumed:
1466 if self.consumed:
1467 return
1467 return
1468
1468
1469 chunk = self.read(32768)
1469 chunk = self.read(32768)
1470 while chunk:
1470 while chunk:
1471 self._pos += len(chunk)
1471 self._pos += len(chunk)
1472 chunk = self.read(32768)
1472 chunk = self.read(32768)
1473
1473
1474 def read(self, size=None):
1474 def read(self, size=None):
1475 """read payload data"""
1475 """read payload data"""
1476 if not self._initialized:
1476 if not self._initialized:
1477 self._readheader()
1477 self._readheader()
1478 if size is None:
1478 if size is None:
1479 data = self._payloadstream.read()
1479 data = self._payloadstream.read()
1480 else:
1480 else:
1481 data = self._payloadstream.read(size)
1481 data = self._payloadstream.read(size)
1482 self._pos += len(data)
1482 self._pos += len(data)
1483 if size is None or len(data) < size:
1483 if size is None or len(data) < size:
1484 if not self.consumed and self._pos:
1484 if not self.consumed and self._pos:
1485 self.ui.debug(
1485 self.ui.debug(
1486 b'bundle2-input-part: total payload size %i\n' % self._pos
1486 b'bundle2-input-part: total payload size %i\n' % self._pos
1487 )
1487 )
1488 self.consumed = True
1488 self.consumed = True
1489 return data
1489 return data
1490
1490
1491
1491
1492 class seekableunbundlepart(unbundlepart):
1492 class seekableunbundlepart(unbundlepart):
1493 """A bundle2 part in a bundle that is seekable.
1493 """A bundle2 part in a bundle that is seekable.
1494
1494
1495 Regular ``unbundlepart`` instances can only be read once. This class
1495 Regular ``unbundlepart`` instances can only be read once. This class
1496 extends ``unbundlepart`` to enable bi-directional seeking within the
1496 extends ``unbundlepart`` to enable bi-directional seeking within the
1497 part.
1497 part.
1498
1498
1499 Bundle2 part data consists of framed chunks. Offsets when seeking
1499 Bundle2 part data consists of framed chunks. Offsets when seeking
1500 refer to the decoded data, not the offsets in the underlying bundle2
1500 refer to the decoded data, not the offsets in the underlying bundle2
1501 stream.
1501 stream.
1502
1502
1503 To facilitate quickly seeking within the decoded data, instances of this
1503 To facilitate quickly seeking within the decoded data, instances of this
1504 class maintain a mapping between offsets in the underlying stream and
1504 class maintain a mapping between offsets in the underlying stream and
1505 the decoded payload. This mapping will consume memory in proportion
1505 the decoded payload. This mapping will consume memory in proportion
1506 to the number of chunks within the payload (which almost certainly
1506 to the number of chunks within the payload (which almost certainly
1507 increases in proportion with the size of the part).
1507 increases in proportion with the size of the part).
1508 """
1508 """
1509
1509
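# Illustrative sketch of the mapping kept in _chunkindex: each entry
# pairs a decoded-payload offset with the corresponding file offset,
# e.g. [(0, 1024), (32768, 33796), ...] for 32768-byte chunks framed
# by 4-byte sizes, letting seek() jump to the chunk holding a position.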
1510 def __init__(self, ui, header, fp):
1510 def __init__(self, ui, header, fp):
1511 # (payload, file) offsets for chunk starts.
1511 # (payload, file) offsets for chunk starts.
1512 self._chunkindex = []
1512 self._chunkindex = []
1513
1513
1514 super(seekableunbundlepart, self).__init__(ui, header, fp)
1514 super(seekableunbundlepart, self).__init__(ui, header, fp)
1515
1515
1516 def _payloadchunks(self, chunknum=0):
1516 def _payloadchunks(self, chunknum=0):
1517 '''seek to specified chunk and start yielding data'''
1517 '''seek to specified chunk and start yielding data'''
1518 if len(self._chunkindex) == 0:
1518 if len(self._chunkindex) == 0:
1519 assert chunknum == 0, b'Must start with chunk 0'
1519 assert chunknum == 0, b'Must start with chunk 0'
1520 self._chunkindex.append((0, self._tellfp()))
1520 self._chunkindex.append((0, self._tellfp()))
1521 else:
1521 else:
1522 assert chunknum < len(self._chunkindex), (
1522 assert chunknum < len(self._chunkindex), (
1523 b'Unknown chunk %d' % chunknum
1523 b'Unknown chunk %d' % chunknum
1524 )
1524 )
1525 self._seekfp(self._chunkindex[chunknum][1])
1525 self._seekfp(self._chunkindex[chunknum][1])
1526
1526
1527 pos = self._chunkindex[chunknum][0]
1527 pos = self._chunkindex[chunknum][0]
1528
1528
1529 for chunk in decodepayloadchunks(self.ui, self._fp):
1529 for chunk in decodepayloadchunks(self.ui, self._fp):
1530 chunknum += 1
1530 chunknum += 1
1531 pos += len(chunk)
1531 pos += len(chunk)
1532 if chunknum == len(self._chunkindex):
1532 if chunknum == len(self._chunkindex):
1533 self._chunkindex.append((pos, self._tellfp()))
1533 self._chunkindex.append((pos, self._tellfp()))
1534
1534
1535 yield chunk
1535 yield chunk
1536
1536
1537 def _findchunk(self, pos):
1537 def _findchunk(self, pos):
1538 '''for a given payload position, return a chunk number and offset'''
1538 '''for a given payload position, return a chunk number and offset'''
1539 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1539 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1540 if ppos == pos:
1540 if ppos == pos:
1541 return chunk, 0
1541 return chunk, 0
1542 elif ppos > pos:
1542 elif ppos > pos:
1543 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1543 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1544 raise ValueError(b'Unknown chunk')
1544 raise ValueError(b'Unknown chunk')
1545
1545
1546 def tell(self):
1546 def tell(self):
1547 return self._pos
1547 return self._pos
1548
1548
1549 def seek(self, offset, whence=os.SEEK_SET):
1549 def seek(self, offset, whence=os.SEEK_SET):
1550 if whence == os.SEEK_SET:
1550 if whence == os.SEEK_SET:
1551 newpos = offset
1551 newpos = offset
1552 elif whence == os.SEEK_CUR:
1552 elif whence == os.SEEK_CUR:
1553 newpos = self._pos + offset
1553 newpos = self._pos + offset
1554 elif whence == os.SEEK_END:
1554 elif whence == os.SEEK_END:
1555 if not self.consumed:
1555 if not self.consumed:
1556 # Can't use self.consume() here because it advances self._pos.
1556 # Can't use self.consume() here because it advances self._pos.
1557 chunk = self.read(32768)
1557 chunk = self.read(32768)
1558 while chunk:
1558 while chunk:
1559 chunk = self.read(32768)
1559 chunk = self.read(32768)
1560 newpos = self._chunkindex[-1][0] - offset
1560 newpos = self._chunkindex[-1][0] - offset
1561 else:
1561 else:
1562 raise ValueError(b'Unknown whence value: %r' % (whence,))
1562 raise ValueError(b'Unknown whence value: %r' % (whence,))
1563
1563
1564 if newpos > self._chunkindex[-1][0] and not self.consumed:
1564 if newpos > self._chunkindex[-1][0] and not self.consumed:
1565 # Can't use self.consume() here because it advances self._pos.
1565 # Can't use self.consume() here because it advances self._pos.
1566 chunk = self.read(32768)
1566 chunk = self.read(32768)
1567 while chunk:
1567 while chunk:
1568 chunk = self.read(32768)
1568 chunk = self.read(32768)
1569
1569
1570 if not 0 <= newpos <= self._chunkindex[-1][0]:
1570 if not 0 <= newpos <= self._chunkindex[-1][0]:
1571 raise ValueError(b'Offset out of range')
1571 raise ValueError(b'Offset out of range')
1572
1572
1573 if self._pos != newpos:
1573 if self._pos != newpos:
1574 chunk, internaloffset = self._findchunk(newpos)
1574 chunk, internaloffset = self._findchunk(newpos)
1575 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1575 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1576 adjust = self.read(internaloffset)
1576 adjust = self.read(internaloffset)
1577 if len(adjust) != internaloffset:
1577 if len(adjust) != internaloffset:
1578 raise error.Abort(_(b'Seek failed\n'))
1578 raise error.Abort(_(b'Seek failed\n'))
1579 self._pos = newpos
1579 self._pos = newpos
1580
1580
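# Usage sketch: random access within a part (offsets illustrative):
#
#     part.seek(0, os.SEEK_END)  # indexes every chunk as a side effect
#     size = part.tell()
#     part.seek(0)               # rewind to the payload start
#     magic = part.read(4)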
1581 def _seekfp(self, offset, whence=0):
1581 def _seekfp(self, offset, whence=0):
1582 """move the underlying file pointer
1582 """move the underlying file pointer
1583
1583
1584 This method is meant for internal usage by the bundle2 protocol only.
1584 This method is meant for internal usage by the bundle2 protocol only.
1585 It directly manipulates the low-level stream, including bundle2-level
1585 It directly manipulates the low-level stream, including bundle2-level
1586 instructions.
1586 instructions.
1587
1587
1588 Do not use it to implement higher-level logic or methods."""
1588 Do not use it to implement higher-level logic or methods."""
1589 if self._seekable:
1589 if self._seekable:
1590 return self._fp.seek(offset, whence)
1590 return self._fp.seek(offset, whence)
1591 else:
1591 else:
1592 raise NotImplementedError(_(b'File pointer is not seekable'))
1592 raise NotImplementedError(_(b'File pointer is not seekable'))
1593
1593
1594 def _tellfp(self):
1594 def _tellfp(self):
1595 """return the file offset, or None if file is not seekable
1595 """return the file offset, or None if file is not seekable
1596
1596
1597 This method is meant for internal usage by the bundle2 protocol only.
1597 This method is meant for internal usage by the bundle2 protocol only.
1598 It directly manipulates the low-level stream, including bundle2-level
1598 It directly manipulates the low-level stream, including bundle2-level
1599 instructions.
1599 instructions.
1600
1600
1601 Do not use it to implement higher-level logic or methods."""
1601 Do not use it to implement higher-level logic or methods."""
1602 if self._seekable:
1602 if self._seekable:
1603 try:
1603 try:
1604 return self._fp.tell()
1604 return self._fp.tell()
1605 except IOError as e:
1605 except IOError as e:
1606 if e.errno == errno.ESPIPE:
1606 if e.errno == errno.ESPIPE:
1607 self._seekable = False
1607 self._seekable = False
1608 else:
1608 else:
1609 raise
1609 raise
1610 return None
1610 return None
1611
1611
1612
1612
1613 # These are only the static capabilities.
1613 # These are only the static capabilities.
1614 # Check the 'getrepocaps' function for the rest.
1614 # Check the 'getrepocaps' function for the rest.
1615 capabilities = {
1615 capabilities = {
1616 b'HG20': (),
1616 b'HG20': (),
1617 b'bookmarks': (),
1617 b'bookmarks': (),
1618 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1618 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1619 b'listkeys': (),
1619 b'listkeys': (),
1620 b'pushkey': (),
1620 b'pushkey': (),
1621 b'digests': tuple(sorted(util.DIGESTS.keys())),
1621 b'digests': tuple(sorted(util.DIGESTS.keys())),
1622 b'remote-changegroup': (b'http', b'https'),
1622 b'remote-changegroup': (b'http', b'https'),
1623 b'hgtagsfnodes': (),
1623 b'hgtagsfnodes': (),
1624 b'phases': (b'heads',),
1624 b'phases': (b'heads',),
1625 b'stream': (b'v2',),
1625 b'stream': (b'v2',),
1626 }
1626 }
1627
1627
1628
1628
1629 def getrepocaps(repo, allowpushback=False, role=None):
1629 def getrepocaps(repo, allowpushback=False, role=None):
1630 """return the bundle2 capabilities for a given repo
1630 """return the bundle2 capabilities for a given repo
1631
1631
1632 Exists to allow extensions (like evolution) to mutate the capabilities.
1632 Exists to allow extensions (like evolution) to mutate the capabilities.
1633
1633
1634 The returned value is used for servers advertising their capabilities as
1634 The returned value is used for servers advertising their capabilities as
1635 well as clients advertising their capabilities to servers as part of
1635 well as clients advertising their capabilities to servers as part of
1636 bundle2 requests. The ``role`` argument specifies which is which.
1636 bundle2 requests. The ``role`` argument specifies which is which.
1637 """
1637 """
1638 if role not in (b'client', b'server'):
1638 if role not in (b'client', b'server'):
1639 raise error.ProgrammingError(b'role argument must be client or server')
1639 raise error.ProgrammingError(b'role argument must be client or server')
1640
1640
1641 caps = capabilities.copy()
1641 caps = capabilities.copy()
1642 caps[b'changegroup'] = tuple(
1642 caps[b'changegroup'] = tuple(
1643 sorted(changegroup.supportedincomingversions(repo))
1643 sorted(changegroup.supportedincomingversions(repo))
1644 )
1644 )
1645 if obsolete.isenabled(repo, obsolete.exchangeopt):
1645 if obsolete.isenabled(repo, obsolete.exchangeopt):
1646 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1646 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1647 caps[b'obsmarkers'] = supportedformat
1647 caps[b'obsmarkers'] = supportedformat
1648 if allowpushback:
1648 if allowpushback:
1649 caps[b'pushback'] = ()
1649 caps[b'pushback'] = ()
1650 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1650 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1651 if cpmode == b'check-related':
1651 if cpmode == b'check-related':
1652 caps[b'checkheads'] = (b'related',)
1652 caps[b'checkheads'] = (b'related',)
1653 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1653 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1654 caps.pop(b'phases')
1654 caps.pop(b'phases')
1655
1655
1656 # Don't advertise stream clone support in server mode if not configured.
1656 # Don't advertise stream clone support in server mode if not configured.
1657 if role == b'server':
1657 if role == b'server':
1658 streamsupported = repo.ui.configbool(
1658 streamsupported = repo.ui.configbool(
1659 b'server', b'uncompressed', untrusted=True
1659 b'server', b'uncompressed', untrusted=True
1660 )
1660 )
1661 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1661 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1662
1662
1663 if not streamsupported or not featuresupported:
1663 if not streamsupported or not featuresupported:
1664 caps.pop(b'stream')
1664 caps.pop(b'stream')
1665 # Else always advertise support on client, because payload support
1665 # Else always advertise support on client, because payload support
1666 # should always be advertised.
1666 # should always be advertised.
1667
1667
1668 if repo.ui.configbool(b'experimental', b'stream-v3'):
1668 if repo.ui.configbool(b'experimental', b'stream-v3'):
1669 if b'stream' in caps:
1669 if b'stream' in caps:
1670 caps[b'stream'] += (b'v3-exp',)
1670 caps[b'stream'] += (b'v3-exp',)
1671
1671
1672 # 'rev-branch-cache' is no longer advertised, but still supported
1672 # 'rev-branch-cache' is no longer advertised, but still supported
1673 # for legacy clients.
1673 # for legacy clients.
1674
1674
1675 return caps
1675 return caps
1676
1676
1677
1677
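# Sketch of a typical return value (exact entries vary with repository
# configuration and role):
#
#     {b'HG20': (), b'bookmarks': (), b'changegroup': (b'01', b'02'),
#      b'obsmarkers': (b'V0', b'V1'), b'phases': (b'heads',), ...}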
1678 def bundle2caps(remote):
1678 def bundle2caps(remote):
1679 """return the bundle capabilities of a peer as dict"""
1679 """return the bundle capabilities of a peer as dict"""
1680 raw = remote.capable(b'bundle2')
1680 raw = remote.capable(b'bundle2')
1681 if not raw and raw != b'':
1681 if not raw and raw != b'':
1682 return {}
1682 return {}
1683 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1683 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1684 return decodecaps(capsblob)
1684 return decodecaps(capsblob)
1685
1685
1686
1686
1687 def obsmarkersversion(caps):
1687 def obsmarkersversion(caps):
1688 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1688 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1689 obscaps = caps.get(b'obsmarkers', ())
1689 obscaps = caps.get(b'obsmarkers', ())
1690 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1690 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1691
1691
1692
1692
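# Worked example: obsmarkersversion({b'obsmarkers': (b'V0', b'V1')})
# returns [0, 1], while a missing b'obsmarkers' entry yields [].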
1693 def writenewbundle(
1693 def writenewbundle(
1694 ui,
1694 ui,
1695 repo,
1695 repo,
1696 source,
1696 source,
1697 filename,
1697 filename,
1698 bundletype,
1698 bundletype,
1699 outgoing,
1699 outgoing,
1700 opts,
1700 opts,
1701 vfs=None,
1701 vfs=None,
1702 compression=None,
1702 compression=None,
1703 compopts=None,
1703 compopts=None,
1704 allow_internal=False,
1704 allow_internal=False,
1705 ):
1705 ):
1706 if bundletype.startswith(b'HG10'):
1706 if bundletype.startswith(b'HG10'):
1707 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1707 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1708 return writebundle(
1708 return writebundle(
1709 ui,
1709 ui,
1710 cg,
1710 cg,
1711 filename,
1711 filename,
1712 bundletype,
1712 bundletype,
1713 vfs=vfs,
1713 vfs=vfs,
1714 compression=compression,
1714 compression=compression,
1715 compopts=compopts,
1715 compopts=compopts,
1716 )
1716 )
1717 elif not bundletype.startswith(b'HG20'):
1717 elif not bundletype.startswith(b'HG20'):
1718 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1718 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1719
1719
1720 # enforce that no internal phases are to be bundled
1720 # enforce that no internal phases are to be bundled
1721 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1721 bundled_internal = repo.revs(b"%ln and _internal()", outgoing.ancestorsof)
1722 if bundled_internal and not allow_internal:
1722 if bundled_internal and not allow_internal:
1723 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1723 count = len(repo.revs(b'%ln and _internal()', outgoing.missing))
1724 msg = "backup bundle would contains %d internal changesets"
1724 msg = "backup bundle would contains %d internal changesets"
1725 msg %= count
1725 msg %= count
1726 raise error.ProgrammingError(msg)
1726 raise error.ProgrammingError(msg)
1727
1727
1728 caps = {}
1728 caps = {}
1729 if opts.get(b'obsolescence', False):
1729 if opts.get(b'obsolescence', False):
1730 caps[b'obsmarkers'] = (b'V1',)
1730 caps[b'obsmarkers'] = (b'V1',)
1731 stream_version = opts.get(b'stream', b"")
1731 stream_version = opts.get(b'stream', b"")
1732 if stream_version == b"v2":
1732 if stream_version == b"v2":
1733 caps[b'stream'] = [b'v2']
1733 caps[b'stream'] = [b'v2']
1734 elif stream_version == b"v3-exp":
1734 elif stream_version == b"v3-exp":
1735 caps[b'stream'] = [b'v3-exp']
1735 caps[b'stream'] = [b'v3-exp']
1736 bundle = bundle20(ui, caps)
1736 bundle = bundle20(ui, caps)
1737 bundle.setcompression(compression, compopts)
1737 bundle.setcompression(compression, compopts)
1738 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1738 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1739 chunkiter = bundle.getchunks()
1739 chunkiter = bundle.getchunks()
1740
1740
1741 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1741 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1742
1742
1743
1743
1744 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1744 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1745 # We should eventually reconcile this logic with the one behind
1745 # We should eventually reconcile this logic with the one behind
1746 # 'exchange.getbundle2partsgenerator'.
1746 # 'exchange.getbundle2partsgenerator'.
1747 #
1747 #
1748 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1748 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1749 # different right now. So we keep them separated for now for the sake of
1749 # different right now. So we keep them separated for now for the sake of
1750 # simplicity.
1750 # simplicity.
1751
1751
1752 # we might not always want a changegroup in such a bundle, for example in
1752 # we might not always want a changegroup in such a bundle, for example in
1753 # stream bundles
1753 # stream bundles
1754 if opts.get(b'changegroup', True):
1754 if opts.get(b'changegroup', True):
1755 cgversion = opts.get(b'cg.version')
1755 cgversion = opts.get(b'cg.version')
1756 if cgversion is None:
1756 if cgversion is None:
1757 cgversion = changegroup.safeversion(repo)
1757 cgversion = changegroup.safeversion(repo)
1758 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1758 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1759 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1759 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1760 part.addparam(b'version', cg.version)
1760 part.addparam(b'version', cg.version)
1761 if b'clcount' in cg.extras:
1761 if b'clcount' in cg.extras:
1762 part.addparam(
1762 part.addparam(
1763 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1763 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1764 )
1764 )
1765 if opts.get(b'phases'):
1765 if opts.get(b'phases'):
1766 target_phase = phases.draft
1766 target_phase = phases.draft
1767 for head in outgoing.ancestorsof:
1767 for head in outgoing.ancestorsof:
1768 target_phase = max(target_phase, repo[head].phase())
1768 target_phase = max(target_phase, repo[head].phase())
1769 if target_phase > phases.draft:
1769 if target_phase > phases.draft:
1770 part.addparam(
1770 part.addparam(
1771 b'targetphase',
1771 b'targetphase',
1772 b'%d' % target_phase,
1772 b'%d' % target_phase,
1773 mandatory=False,
1773 mandatory=False,
1774 )
1774 )
1775 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1775 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
1776 part.addparam(b'exp-sidedata', b'1')
1776 part.addparam(b'exp-sidedata', b'1')
1777
1777
1778 if opts.get(b'stream', b"") == b"v2":
1778 if opts.get(b'stream', b"") == b"v2":
1779 addpartbundlestream2(bundler, repo, stream=True)
1779 addpartbundlestream2(bundler, repo, stream=True)
1780
1780
1781 if opts.get(b'stream', b"") == b"v3-exp":
1781 if opts.get(b'stream', b"") == b"v3-exp":
1782 addpartbundlestream2(bundler, repo, stream=True)
1782 addpartbundlestream2(bundler, repo, stream=True)
1783
1783
1784 if opts.get(b'tagsfnodescache', True):
1784 if opts.get(b'tagsfnodescache', True):
1785 addparttagsfnodescache(repo, bundler, outgoing)
1785 addparttagsfnodescache(repo, bundler, outgoing)
1786
1786
1787 if opts.get(b'revbranchcache', True):
1787 if opts.get(b'revbranchcache', True):
1788 addpartrevbranchcache(repo, bundler, outgoing)
1788 addpartrevbranchcache(repo, bundler, outgoing)
1789
1789
1790 if opts.get(b'obsolescence', False):
1790 if opts.get(b'obsolescence', False):
1791 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1791 obsmarkers = repo.obsstore.relevantmarkers(nodes=outgoing.missing)
1792 buildobsmarkerspart(
1792 buildobsmarkerspart(
1793 bundler,
1793 bundler,
1794 obsmarkers,
1794 obsmarkers,
1795 mandatory=opts.get(b'obsolescence-mandatory', True),
1795 mandatory=opts.get(b'obsolescence-mandatory', True),
1796 )
1796 )
1797
1797
1798 if opts.get(b'phases', False):
1798 if opts.get(b'phases', False):
1799 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1799 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1800 phasedata = phases.binaryencode(headsbyphase)
1800 phasedata = phases.binaryencode(headsbyphase)
1801 bundler.newpart(b'phase-heads', data=phasedata)
1801 bundler.newpart(b'phase-heads', data=phasedata)
1802
1802
1803
1803
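# Illustrative sketch (not part of the original file): how the 'opts'
# mapping driving the part generation above is typically shaped. The keys
# mirror the ones checked in the code above; the helper itself is a
# hypothetical convenience, not an existing Mercurial API.
def _example_bundle_opts(include_obsmarkers=True):
    return {
        b'changegroup': True,  # emit a 'changegroup' part
        b'tagsfnodescache': True,  # optional .hgtags fnodes cache part
        b'revbranchcache': True,  # optional rev-branch-cache part
        b'obsolescence': include_obsmarkers,  # optional 'obsmarkers' part
        b'obsolescence-mandatory': True,  # make the obsmarkers part mandatory
        b'phases': True,  # emit a 'phase-heads' part
    }

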
def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(
            b'hgtagsfnodes',
            mandatory=False,
            data=b''.join(chunks),
        )


def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changesets
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)


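# Illustrative sketch (not part of the original file): each record in the
# rev-branch-cache part is a '>III' header (branch-name length, number of
# open-head nodes, number of closed-head nodes) followed by the UTF-8 branch
# name and the raw 20-byte nodes. The hypothetical helper below packs one
# record the same way generate() above does.
def _example_pack_rbc_record(utf8branch, nodes, closed):
    header = rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
    return (
        header + utf8branch + b''.join(sorted(nodes)) + b''.join(sorted(closed))
    )

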
def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params


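# Illustrative sketch (not part of the original file): the helpers above drop
# the local-only "shared" requirement, then sort, comma-join, and url-quote,
# so the separator travels as %2C. The assertion assumes urlreq.quote returns
# bytes, as the byte-string formatting above requires.
def _example_requirements_quoting():
    spec = _formatrequirementsspec([b'store', b'revlogv1', b'shared'])
    assert spec == b'revlogv1%2Cstore'
    return spec

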
def format_remote_wanted_sidedata(repo):
    """Formats a repo's wanted sidedata categories into a bytestring for
    capabilities exchange."""
    wanted = b""
    if repo._wanted_sidedata:
        wanted = b','.join(
            pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
        )
    return wanted


def read_remote_wanted_sidedata(remote):
    sidedata_categories = remote.capable(b'exp-wanted-sidedata')
    return read_wanted_sidedata(sidedata_categories)


def read_wanted_sidedata(formatted):
    if formatted:
        return set(formatted.split(b','))
    return set()


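# Illustrative sketch (not part of the original file): the helpers above form
# a simple round-trip, serializing a set of sidedata category names to a
# sorted comma-separated bytestring and parsing it back. The category names
# used here are hypothetical placeholders.
def _example_sidedata_roundtrip():
    categories = {b'example-copies', b'example-files'}
    formatted = b','.join(sorted(categories))
    assert read_wanted_sidedata(formatted) == categories
    assert read_wanted_sidedata(b'') == set()

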
def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get('stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        msg = _(b'stream data requested but server does not allow this feature')
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    if not (b'stream' in bundler.capabilities):
        msg = _(
            b'stream data requested but supported streaming clone versions were not specified'
        )
        hint = _(b'the client seems buggy')
        raise error.Abort(msg, hint=hint)
    client_supported = set(bundler.capabilities[b'stream'])
    server_supported = set(getrepocaps(repo, role=b'client').get(b'stream', []))
    common_supported = client_supported & server_supported
    if not common_supported:
        msg = _(b'no common supported version with the client: %s; %s')
        str_server = b','.join(sorted(server_supported))
        str_client = b','.join(sorted(client_supported))
        msg %= (str_server, str_client)
        raise error.Abort(msg)
    version = max(common_supported)

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get('includepats')
    excludepats = kwargs.get('excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    if version == b"v2":
        filecount, bytecount, it = streamclone.generatev2(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream2', data=it)
        part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
        part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
        part.addparam(b'requirements', requirements, mandatory=True)
    elif version == b"v3-exp":
        it = streamclone.generatev3(
            repo, includepats, excludepats, includeobsmarkers
        )
        requirements = streamclone.streamed_requirements(repo)
        requirements = _formatrequirementsspec(requirements)
        part = bundler.newpart(b'stream3-exp', data=it)
        part.addparam(b'requirements', requirements, mandatory=True)


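# Illustrative sketch (not part of the original file): the negotiation above
# intersects the stream versions advertised by client and server and picks
# the lexicographic maximum, so b"v3-exp" wins over b"v2" when both sides
# support it. The helper mirrors that logic in isolation.
def _example_pick_stream_version(client_supported, server_supported):
    common = set(client_supported) & set(server_supported)
    if not common:
        return None  # the real code aborts here
    return max(common)

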
def buildobsmarkerspart(bundler, markers, mandatory=True):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)


def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only support v1 changegroups')
            )

        # HG20 is the case without 2 values to unpack, but is handled above.
        # pytype: disable=bad-unpacking
        header, comp = bundletypes[bundletype]
        # pytype: enable=bad-unpacking

        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


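# Illustrative sketch (not part of the original file): for legacy (non-HG20)
# bundles the on-disk stream is a fixed header followed by the compressed
# changegroup chunks, which is the shape chunkiter() builds above. A
# hypothetical, dependency-free equivalent for the zlib-compressed HG10GZ
# type:
def _example_legacy_bundle_stream(changegroup_chunks):
    import zlib

    yield b'HG10GZ'  # header naming the format and compression
    compressor = zlib.compressobj()
    for chunk in changegroup_chunks:
        data = compressor.compress(chunk)
        if data:
            yield data
    yield compressor.flush()

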
def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result


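# Illustrative note (not part of the original file): addchangegroup return
# values encode head movement: 0 means an error, 1 means the head count is
# unchanged, 1 + n means n heads were added, and -1 - n means n heads were
# removed. The loop above sums the deltas, so results of 3 (two heads added)
# and -2 (one head removed) net out to one added head:
#
#   changedheads = (3 - 1) + (-2 + 1) = 1  ->  combined result = 1 + 1 = 2

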
@parthandler(
    b'changegroup',
    (
        b'version',
        b'nbchanges',
        b'exp-sidedata',
        b'exp-wanted-sidedata',
        b'treemanifest',
        b'targetphase',
    ),
)
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo"""
    from . import localrepo

    tr = op.gettransaction()
    unpackerversion = inpart.params.get(b'version', b'01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if b'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get(b'nbchanges'))
    if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
        if len(op.repo.changelog) != 0:
            raise error.Abort(
                _(
                    b"bundle contains tree manifests, but local repo is "
                    b"non-empty and does not use tree manifests"
                )
            )
        op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
        op.repo.svfs.options = localrepo.resolvestorevfsoptions(
            op.repo.ui, op.repo.requirements, op.repo.features
        )
        scmutil.writereporequirements(op.repo)

    extrakwargs = {}
    targetphase = inpart.params.get(b'targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)

    remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
    extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)

    ret = _processchangegroup(
        op,
        cg,
        tr,
        op.source,
        b'bundle2',
        expectedtotal=nbchangesets,
        **extrakwargs
    )
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup', mandatory=False)
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    assert not inpart.read()


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate that what was
      retrieved by the client matches the server's knowledge about the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate that what was retrieved
      by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = urlutil.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (urlutil.hidepassword(raw_url), e.message)
        )
    assert not inpart.read()


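# Illustrative note (not part of the original file): a well-formed
# remote-changegroup part carries the bundle location plus validation data,
# e.g. (hypothetical values):
#
#   {
#       b'url': b'https://example.com/bundles/abc.hg',
#       b'size': b'123456',
#       b'digests': b'sha1',
#       b'digest:sha1': b'3f786850e387550fdab836ed7e6dc881de23001b',
#   }
#
# Every digest type listed in b'digests' must have a matching
# b'digest:<type>' param, and all listed digests are verified once the
# payload has been consumed.

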
@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is to be used to detect push races regarding bookmarks. It
    contains binary encoded (bookmark, node) tuples. If the local state does
    not match the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(op.repo, inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    short(node),
                    short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for races on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. If other activities happen on unrelated heads,
    they are ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in phasetonodes.items():
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )


@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied as
    is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior
    is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(op.repo, inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = hex(bookstore.get(book, b''))
                hookargs[b'new'] = hex(node if node is not None else b'')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        for book, node in changes:
            if bookmarks.isdivergent(book):
                msg = _(b'cannot accept divergent bookmark %s!') % book
                raise error.Abort(msg)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook(unused_success):
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )


@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


@parthandler(b'hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)


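# Illustrative sketch (not part of the original file): the hgtagsfnodes
# payload is a flat concatenation of fixed-size (changeset node, .hgtags
# filenode) pairs with no framing, which is why the reader above simply pulls
# 20 bytes twice until the stream runs dry. A hypothetical builder for such a
# payload, matching what addparttagsfnodescache emits:
def _example_hgtagsfnodes_payload(entries):
    # entries: iterable of (20-byte changeset node, 20-byte filenode) pairs
    chunks = []
    for node, fnode in entries:
        assert len(node) == 20 and len(fnode) == 20
        chunks.extend([node, fnode])
    return b''.join(chunks)

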
rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """Legacy part, ignored for compatibility with bundles from or
    for Mercurial before 5.7. Newer Mercurial computes the cache
    efficiently enough during unbundling that the additional transfer
    is unnecessary."""


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)


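# Illustrative sketch (not part of the original file): pushed variables are
# upper-cased and namespaced before reaching hooks, so a client-side
# '--pushvar debug=1' (hypothetical value) surfaces to server hooks as the
# environment-style key USERVAR_DEBUG.
def _example_pushvar_key(key):
    return b"USERVAR_" + key.upper()

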
@parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
def handlestreamv2bundle(op, part):
    requirements = urlreq.unquote(part.params[b'requirements'])
    requirements = requirements.split(b',') if requirements else []
    filecount = int(part.params[b'filecount'])
    bytecount = int(part.params[b'bytecount'])

    repo = op.repo
    if len(repo):
        msg = _(b'cannot apply stream clone to non-empty repository')
        raise error.Abort(msg)

    repo.ui.debug(b'applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)


2616 @parthandler(b'stream3-exp', (b'requirements',))
2616 @parthandler(b'stream3-exp', (b'requirements',))
2617 def handlestreamv3bundle(op, part):
2617 def handlestreamv3bundle(op, part):
2618 requirements = urlreq.unquote(part.params[b'requirements'])
2618 requirements = urlreq.unquote(part.params[b'requirements'])
2619 requirements = requirements.split(b',') if requirements else []
2619 requirements = requirements.split(b',') if requirements else []
2620
2620
2621 repo = op.repo
2621 repo = op.repo
2622 if len(repo):
2622 if len(repo):
2623 msg = _(b'cannot apply stream clone to non-empty repository')
2623 msg = _(b'cannot apply stream clone to non-empty repository')
2624 raise error.Abort(msg)
2624 raise error.Abort(msg)
2625
2625
2626 repo.ui.debug(b'applying stream bundle\n')
2626 repo.ui.debug(b'applying stream bundle\n')
2627 streamclone.applybundlev3(repo, part, requirements)
2627 streamclone.applybundlev3(repo, part, requirements)
2628
2628
2629
2629
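Both stream handlers decode their 'requirements' parameter the same way: a urlquoted, comma-separated list. A minimal sketch of that decoding, using the stdlib unquote_to_bytes as a stand-in for Mercurial's urlreq.unquote:

    from urllib.parse import unquote_to_bytes

    def parse_requirements(param):
        # An empty parameter yields an empty list, not [b''].
        requirements = unquote_to_bytes(param)
        return requirements.split(b',') if requirements else []

    assert parse_requirements(b'generaldelta%2Crevlogv1') == [
        b'generaldelta',
        b'revlogv1',
    ]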
2630 def widen_bundle(
2630 def widen_bundle(
2631 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2631 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2632 ):
2632 ):
2633 """generates bundle2 for widening a narrow clone
2633 """generates bundle2 for widening a narrow clone
2634
2634
2635 bundler is the bundle to which data should be added
2635 bundler is the bundle to which data should be added
2636 repo is the localrepository instance
2636 repo is the localrepository instance
2637 oldmatcher matches what the client already has
2637 oldmatcher matches what the client already has
2638 newmatcher matches what the client needs (including what it already has)
2638 newmatcher matches what the client needs (including what it already has)
2639 common is the set of common heads between server and client
2639 common is the set of common heads between server and client
2640 known is a set of revs known on the client side (used in ellipses)
2640 known is a set of revs known on the client side (used in ellipses)
2641 cgversion is the changegroup version to send
2641 cgversion is the changegroup version to send
2642 ellipses is a boolean telling whether to send ellipses data or not
2642 ellipses is a boolean telling whether to send ellipses data or not
2643
2643
2644 returns the bundler populated with the data required for widening
2644 returns the bundler populated with the data required for widening
2645 """
2645 """
2646 commonnodes = set()
2646 commonnodes = set()
2647 cl = repo.changelog
2647 cl = repo.changelog
2648 for r in repo.revs(b"::%ln", common):
2648 for r in repo.revs(b"::%ln", common):
2649 commonnodes.add(cl.node(r))
2649 commonnodes.add(cl.node(r))
2650 if commonnodes:
2650 if commonnodes:
2651 packer = changegroup.getbundler(
2651 packer = changegroup.getbundler(
2652 cgversion,
2652 cgversion,
2653 repo,
2653 repo,
2654 oldmatcher=oldmatcher,
2654 oldmatcher=oldmatcher,
2655 matcher=newmatcher,
2655 matcher=newmatcher,
2656 fullnodes=commonnodes,
2656 fullnodes=commonnodes,
2657 )
2657 )
2658 cgdata = packer.generate(
2658 cgdata = packer.generate(
2659 {repo.nullid},
2659 {repo.nullid},
2660 list(commonnodes),
2660 list(commonnodes),
2661 False,
2661 False,
2662 b'narrow_widen',
2662 b'narrow_widen',
2663 changelog=False,
2663 changelog=False,
2664 )
2664 )
2665
2665
2666 part = bundler.newpart(b'changegroup', data=cgdata)
2666 part = bundler.newpart(b'changegroup', data=cgdata)
2667 part.addparam(b'version', cgversion)
2667 part.addparam(b'version', cgversion)
2668 if scmutil.istreemanifest(repo):
2668 if scmutil.istreemanifest(repo):
2669 part.addparam(b'treemanifest', b'1')
2669 part.addparam(b'treemanifest', b'1')
2670 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2670 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2671 part.addparam(b'exp-sidedata', b'1')
2671 part.addparam(b'exp-sidedata', b'1')
2672 wanted = format_remote_wanted_sidedata(repo)
2672 wanted = format_remote_wanted_sidedata(repo)
2673 part.addparam(b'exp-wanted-sidedata', wanted)
2673 part.addparam(b'exp-wanted-sidedata', wanted)
2674
2674
2675 return bundler
2675 return bundler
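The changegroup built by widen_bundle is keyed on what the client already has: every ancestor of the common heads. A toy rendering of that ancestor closure over a plain node -> parents mapping (a hypothetical helper, not the changelog API):

    def ancestors_of(heads, parents):
        # Return the heads plus all of their ancestors.
        seen, stack = set(), list(heads)
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(parents.get(n, ()))
        return seen

    parents = {b'c': [b'b'], b'b': [b'a'], b'a': []}
    assert ancestors_of([b'c'], parents) == {b'a', b'b', b'c'}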
@@ -1,2953 +1,2953
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
3 # Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8
8
9 import collections
9 import collections
10 import weakref
10 import weakref
11
11
12 from .i18n import _
12 from .i18n import _
13 from .node import (
13 from .node import (
14 hex,
14 hex,
15 nullrev,
15 nullrev,
16 )
16 )
17 from . import (
17 from . import (
18 bookmarks as bookmod,
18 bookmarks as bookmod,
19 bundle2,
19 bundle2,
20 bundlecaches,
20 bundlecaches,
21 changegroup,
21 changegroup,
22 discovery,
22 discovery,
23 error,
23 error,
24 lock as lockmod,
24 lock as lockmod,
25 logexchange,
25 logexchange,
26 narrowspec,
26 narrowspec,
27 obsolete,
27 obsolete,
28 obsutil,
28 obsutil,
29 phases,
29 phases,
30 pushkey,
30 pushkey,
31 pycompat,
31 pycompat,
32 requirements,
32 requirements,
33 scmutil,
33 scmutil,
34 streamclone,
34 streamclone,
35 url as urlmod,
35 url as urlmod,
36 util,
36 util,
37 wireprototypes,
37 wireprototypes,
38 )
38 )
39 from .utils import (
39 from .utils import (
40 hashutil,
40 hashutil,
41 stringutil,
41 stringutil,
42 urlutil,
42 urlutil,
43 )
43 )
44 from .interfaces import repository
44 from .interfaces import repository
45
45
46 urlerr = util.urlerr
46 urlerr = util.urlerr
47 urlreq = util.urlreq
47 urlreq = util.urlreq
48
48
49 _NARROWACL_SECTION = b'narrowacl'
49 _NARROWACL_SECTION = b'narrowacl'
50
50
51
51
52 def readbundle(ui, fh, fname, vfs=None):
52 def readbundle(ui, fh, fname, vfs=None):
53 header = changegroup.readexactly(fh, 4)
53 header = changegroup.readexactly(fh, 4)
54
54
55 alg = None
55 alg = None
56 if not fname:
56 if not fname:
57 fname = b"stream"
57 fname = b"stream"
58 if not header.startswith(b'HG') and header.startswith(b'\0'):
58 if not header.startswith(b'HG') and header.startswith(b'\0'):
59 fh = changegroup.headerlessfixup(fh, header)
59 fh = changegroup.headerlessfixup(fh, header)
60 header = b"HG10"
60 header = b"HG10"
61 alg = b'UN'
61 alg = b'UN'
62 elif vfs:
62 elif vfs:
63 fname = vfs.join(fname)
63 fname = vfs.join(fname)
64
64
65 magic, version = header[0:2], header[2:4]
65 magic, version = header[0:2], header[2:4]
66
66
67 if magic != b'HG':
67 if magic != b'HG':
68 raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
68 raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
69 if version == b'10':
69 if version == b'10':
70 if alg is None:
70 if alg is None:
71 alg = changegroup.readexactly(fh, 2)
71 alg = changegroup.readexactly(fh, 2)
72 return changegroup.cg1unpacker(fh, alg)
72 return changegroup.cg1unpacker(fh, alg)
73 elif version.startswith(b'2'):
73 elif version.startswith(b'2'):
74 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
74 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
75 elif version == b'S1':
75 elif version == b'S1':
76 return streamclone.streamcloneapplier(fh)
76 return streamclone.streamcloneapplier(fh)
77 else:
77 else:
78 raise error.Abort(
78 raise error.Abort(
79 _(b'%s: unknown bundle version %s') % (fname, version)
79 _(b'%s: unknown bundle version %s') % (fname, version)
80 )
80 )
81
81
82
82
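readbundle dispatches on the first four bytes of the stream. A simplified sketch of that dispatch (the headerless-bundle fixup and compression detection above are elided):

    def sniff_bundle_kind(header):
        # HG10: bundle1 (the next two bytes name the compression),
        # HG2x: bundle2, HGS1: stream clone bundle.
        magic, version = header[0:2], header[2:4]
        if magic != b'HG':
            raise ValueError('not a Mercurial bundle')
        if version == b'10':
            return 'bundle1'
        if version.startswith(b'2'):
            return 'bundle2'
        if version == b'S1':
            return 'streamclone'
        raise ValueError('unknown bundle version %r' % version)

    assert sniff_bundle_kind(b'HG20') == 'bundle2'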
83 def _format_params(params):
83 def _format_params(params):
84 parts = []
84 parts = []
85 for key, value in sorted(params.items()):
85 for key, value in sorted(params.items()):
86 value = urlreq.quote(value)
86 value = urlreq.quote(value)
87 parts.append(b"%s=%s" % (key, value))
87 parts.append(b"%s=%s" % (key, value))
88 return b';'.join(parts)
88 return b';'.join(parts)
89
89
90
90
91 def getbundlespec(ui, fh):
91 def getbundlespec(ui, fh):
92 """Infer the bundlespec from a bundle file handle.
92 """Infer the bundlespec from a bundle file handle.
93
93
94 Reading the bundle advances the file handle's position, and the
94 Reading the bundle advances the file handle's position, and the
95 original seek position is not restored.
95 original seek position is not restored.
96 """
96 """
97
97
98 def speccompression(alg):
98 def speccompression(alg):
99 try:
99 try:
100 return util.compengines.forbundletype(alg).bundletype()[0]
100 return util.compengines.forbundletype(alg).bundletype()[0]
101 except KeyError:
101 except KeyError:
102 return None
102 return None
103
103
104 params = {}
104 params = {}
105
105
106 b = readbundle(ui, fh, None)
106 b = readbundle(ui, fh, None)
107 if isinstance(b, changegroup.cg1unpacker):
107 if isinstance(b, changegroup.cg1unpacker):
108 alg = b._type
108 alg = b._type
109 if alg == b'_truncatedBZ':
109 if alg == b'_truncatedBZ':
110 alg = b'BZ'
110 alg = b'BZ'
111 comp = speccompression(alg)
111 comp = speccompression(alg)
112 if not comp:
112 if not comp:
113 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
113 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
114 return b'%s-v1' % comp
114 return b'%s-v1' % comp
115 elif isinstance(b, bundle2.unbundle20):
115 elif isinstance(b, bundle2.unbundle20):
116 if b'Compression' in b.params:
116 if b'Compression' in b.params:
117 comp = speccompression(b.params[b'Compression'])
117 comp = speccompression(b.params[b'Compression'])
118 if not comp:
118 if not comp:
119 raise error.Abort(
119 raise error.Abort(
120 _(b'unknown compression algorithm: %s') % b.params[b'Compression']
120 _(b'unknown compression algorithm: %s') % b.params[b'Compression']
121 )
121 )
122 else:
122 else:
123 comp = b'none'
123 comp = b'none'
124
124
125 version = None
125 version = None
126 for part in b.iterparts():
126 for part in b.iterparts():
127 if part.type == b'changegroup':
127 if part.type == b'changegroup':
128 cgversion = part.params[b'version']
128 cgversion = part.params[b'version']
129 if cgversion in (b'01', b'02'):
129 if cgversion in (b'01', b'02'):
130 version = b'v2'
130 version = b'v2'
131 elif cgversion in (b'03',):
131 elif cgversion in (b'03',):
132 version = b'v2'
132 version = b'v2'
133 params[b'cg.version'] = cgversion
133 params[b'cg.version'] = cgversion
134 else:
134 else:
135 raise error.Abort(
135 raise error.Abort(
136 _(
136 _(
137 b'changegroup version %s does not have '
137 b'changegroup version %s does not have '
138 b'a known bundlespec'
138 b'a known bundlespec'
139 )
139 )
140 % cgversion,
140 % cgversion,
141 hint=_(b'try upgrading your Mercurial client'),
141 hint=_(b'try upgrading your Mercurial client'),
142 )
142 )
143 elif part.type == b'stream2' and version is None:
143 elif part.type == b'stream2' and version is None:
144 # A stream2 part must be part of a v2 bundle
144 # A stream2 part must be part of a v2 bundle
145 requirements = urlreq.unquote(part.params[b'requirements'])
145 requirements = urlreq.unquote(part.params[b'requirements'])
146 splitted = requirements.split()
146 splitted = requirements.split()
147 params = bundle2._formatrequirementsparams(splitted)
147 params = bundle2._formatrequirementsparams(splitted)
148 return b'none-v2;stream=v2;%s' % params
148 return b'none-v2;stream=v2;%s' % params
149 elif part.type == b'stream3-exp' and version is None:
149 elif part.type == b'stream3-exp' and version is None:
150 # A stream3 part must be part of a v2 bundle
150 # A stream3 part must be part of a v2 bundle
151 requirements = urlreq.unquote(part.params[b'requirements'])
151 requirements = urlreq.unquote(part.params[b'requirements'])
152 splitted = requirements.split()
152 splitted = requirements.split()
153 params = bundle2._formatrequirementsparams(splitted)
153 params = bundle2._formatrequirementsparams(splitted)
154 return b'none-v2;stream=v3-exp;%s' % params
154 return b'none-v2;stream=v3-exp;%s' % params
155 elif part.type == b'obsmarkers':
155 elif part.type == b'obsmarkers':
156 params[b'obsolescence'] = b'yes'
156 params[b'obsolescence'] = b'yes'
157 if not part.mandatory:
157 if not part.mandatory:
158 params[b'obsolescence-mandatory'] = b'no'
158 params[b'obsolescence-mandatory'] = b'no'
159
159
160 if not version:
160 if not version:
161 params[b'changegroup'] = b'no'
161 params[b'changegroup'] = b'no'
162 version = b'v2'
162 version = b'v2'
163 spec = b'%s-%s' % (comp, version)
163 spec = b'%s-%s' % (comp, version)
164 if params:
164 if params:
165 spec += b';'
165 spec += b';'
166 spec += _format_params(params)
166 spec += _format_params(params)
167 return spec
167 return spec
168
168
169 elif isinstance(b, streamclone.streamcloneapplier):
169 elif isinstance(b, streamclone.streamcloneapplier):
170 requirements = streamclone.readbundle1header(fh)[2]
170 requirements = streamclone.readbundle1header(fh)[2]
171 formatted = bundle2._formatrequirementsparams(requirements)
171 formatted = bundle2._formatrequirementsparams(requirements)
172 return b'none-packed1;%s' % formatted
172 return b'none-packed1;%s' % formatted
173 else:
173 else:
174 raise error.Abort(_(b'unknown bundle type: %s') % b)
174 raise error.Abort(_(b'unknown bundle type: %s') % b)
175
175
176
176
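The spec assembled above is '<compression>-<version>' plus optional ';'-joined parameters; for example, a zstd-compressed bundle2 carrying an 03 changegroup becomes b'zstd-v2;cg.version=03'. A standalone sketch of that final assembly (re-implementing the _format_params behaviour rather than importing it):

    from urllib.parse import quote_from_bytes

    def format_spec(comp, version, params):
        parts = [
            b'%s=%s' % (k, quote_from_bytes(v).encode('ascii'))
            for k, v in sorted(params.items())
        ]
        spec = b'%s-%s' % (comp, version)
        if parts:
            spec += b';' + b';'.join(parts)
        return spec

    assert format_spec(b'zstd', b'v2', {b'cg.version': b'03'}) == (
        b'zstd-v2;cg.version=03'
    )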
177 def _computeoutgoing(repo, heads, common):
177 def _computeoutgoing(repo, heads, common):
178 """Computes which revs are outgoing given a set of common
178 """Computes which revs are outgoing given a set of common
179 and a set of heads.
179 and a set of heads.
180
180
181 This is a separate function so extensions can have access to
181 This is a separate function so extensions can have access to
182 the logic.
182 the logic.
183
183
184 Returns a discovery.outgoing object.
184 Returns a discovery.outgoing object.
185 """
185 """
186 cl = repo.changelog
186 cl = repo.changelog
187 if common:
187 if common:
188 hasnode = cl.hasnode
188 hasnode = cl.hasnode
189 common = [n for n in common if hasnode(n)]
189 common = [n for n in common if hasnode(n)]
190 else:
190 else:
191 common = [repo.nullid]
191 common = [repo.nullid]
192 if not heads:
192 if not heads:
193 heads = cl.heads()
193 heads = cl.heads()
194 return discovery.outgoing(repo, common, heads)
194 return discovery.outgoing(repo, common, heads)
195
195
196
196
197 def _checkpublish(pushop):
197 def _checkpublish(pushop):
198 repo = pushop.repo
198 repo = pushop.repo
199 ui = repo.ui
199 ui = repo.ui
200 behavior = ui.config(b'experimental', b'auto-publish')
200 behavior = ui.config(b'experimental', b'auto-publish')
201 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
201 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
202 return
202 return
203 remotephases = listkeys(pushop.remote, b'phases')
203 remotephases = listkeys(pushop.remote, b'phases')
204 if not remotephases.get(b'publishing', False):
204 if not remotephases.get(b'publishing', False):
205 return
205 return
206
206
207 if pushop.revs is None:
207 if pushop.revs is None:
208 published = repo.filtered(b'served').revs(b'not public()')
208 published = repo.filtered(b'served').revs(b'not public()')
209 else:
209 else:
210 published = repo.revs(b'::%ln - public()', pushop.revs)
210 published = repo.revs(b'::%ln - public()', pushop.revs)
211 # we want to use pushop.revs in the revset even if they themselves are
211 # we want to use pushop.revs in the revset even if they themselves are
212 # secret, but we don't want to have anything that the server won't see
212 # secret, but we don't want to have anything that the server won't see
213 # in the result of this expression
213 # in the result of this expression
214 published &= repo.filtered(b'served')
214 published &= repo.filtered(b'served')
215 if published:
215 if published:
216 if behavior == b'warn':
216 if behavior == b'warn':
217 ui.warn(
217 ui.warn(
218 _(b'%i changesets about to be published\n') % len(published)
218 _(b'%i changesets about to be published\n') % len(published)
219 )
219 )
220 elif behavior == b'confirm':
220 elif behavior == b'confirm':
221 if ui.promptchoice(
221 if ui.promptchoice(
222 _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
222 _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
223 % len(published)
223 % len(published)
224 ):
224 ):
225 raise error.CanceledError(_(b'user quit'))
225 raise error.CanceledError(_(b'user quit'))
226 elif behavior == b'abort':
226 elif behavior == b'abort':
227 msg = _(b'push would publish %i changesets') % len(published)
227 msg = _(b'push would publish %i changesets') % len(published)
228 hint = _(
228 hint = _(
229 b"use --publish or adjust 'experimental.auto-publish'"
229 b"use --publish or adjust 'experimental.auto-publish'"
230 b" config"
230 b" config"
231 )
231 )
232 raise error.Abort(msg, hint=hint)
232 raise error.Abort(msg, hint=hint)
233
233
234
234
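The three experimental.auto-publish behaviours reduce to a small decision table. A toy model of it, with prompt_yes standing in for the interactive confirmation:

    def check_publish(behavior, n_published, prompt_yes):
        # warn: print and continue; confirm: ask, refuse on 'no';
        # abort: always refuse to publish implicitly.
        if not n_published or behavior not in ('warn', 'confirm', 'abort'):
            return
        if behavior == 'warn':
            print('%i changesets about to be published' % n_published)
        elif behavior == 'confirm' and not prompt_yes:
            raise RuntimeError('user quit')
        elif behavior == 'abort':
            raise RuntimeError(
                'push would publish %i changesets' % n_published)

    check_publish('warn', 2, prompt_yes=True)  # prints a warning only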
235 def _forcebundle1(op):
235 def _forcebundle1(op):
236 """return true if a pull/push must use bundle1
236 """return true if a pull/push must use bundle1
237
237
238 This function is used to allow testing of the older bundle version"""
238 This function is used to allow testing of the older bundle version"""
239 ui = op.repo.ui
239 ui = op.repo.ui
240 # The goal of this config is to allow developers to choose the bundle
240 # The goal of this config is to allow developers to choose the bundle
241 # version used during exchange. This is especially handy during tests.
241 # version used during exchange. This is especially handy during tests.
242 # Value is a list of bundle versions to pick from; the highest available
242 # Value is a list of bundle versions to pick from; the highest available
243 # version should be used.
243 # version should be used.
244 #
244 #
245 # developer config: devel.legacy.exchange
245 # developer config: devel.legacy.exchange
246 exchange = ui.configlist(b'devel', b'legacy.exchange')
246 exchange = ui.configlist(b'devel', b'legacy.exchange')
247 forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
247 forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
248 return forcebundle1 or not op.remote.capable(b'bundle2')
248 return forcebundle1 or not op.remote.capable(b'bundle2')
249
249
250
250
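A compact restatement of the decision above, on plain values:

    def force_bundle1(exchange_cfg, remote_has_bundle2):
        # devel.legacy.exchange semantics: bundle1 is forced when the
        # list names bundle1 but not bundle2, or when the remote
        # simply lacks the bundle2 capability.
        forced = 'bundle2' not in exchange_cfg and 'bundle1' in exchange_cfg
        return forced or not remote_has_bundle2

    assert force_bundle1(['bundle1'], True) is True
    assert force_bundle1(['bundle1', 'bundle2'], True) is False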
251 class pushoperation:
251 class pushoperation:
252 """A object that represent a single push operation
252 """A object that represent a single push operation
253
253
254 Its purpose is to carry push related state and very common operations.
254 Its purpose is to carry push related state and very common operations.
255
255
256 A new pushoperation should be created at the beginning of each push and
256 A new pushoperation should be created at the beginning of each push and
257 discarded afterward.
257 discarded afterward.
258 """
258 """
259
259
260 def __init__(
260 def __init__(
261 self,
261 self,
262 repo,
262 repo,
263 remote,
263 remote,
264 force=False,
264 force=False,
265 revs=None,
265 revs=None,
266 newbranch=False,
266 newbranch=False,
267 bookmarks=(),
267 bookmarks=(),
268 publish=False,
268 publish=False,
269 pushvars=None,
269 pushvars=None,
270 ):
270 ):
271 # repo we push from
271 # repo we push from
272 self.repo = repo
272 self.repo = repo
273 self.ui = repo.ui
273 self.ui = repo.ui
274 # repo we push to
274 # repo we push to
275 self.remote = remote
275 self.remote = remote
276 # force option provided
276 # force option provided
277 self.force = force
277 self.force = force
278 # revs to be pushed (None is "all")
278 # revs to be pushed (None is "all")
279 self.revs = revs
279 self.revs = revs
280 # bookmark explicitly pushed
280 # bookmark explicitly pushed
281 self.bookmarks = bookmarks
281 self.bookmarks = bookmarks
282 # allow push of new branch
282 # allow push of new branch
283 self.newbranch = newbranch
283 self.newbranch = newbranch
284 # step already performed
284 # step already performed
285 # (used to check what steps have been already performed through bundle2)
285 # (used to check what steps have been already performed through bundle2)
286 self.stepsdone = set()
286 self.stepsdone = set()
287 # Integer version of the changegroup push result
287 # Integer version of the changegroup push result
288 # - None means nothing to push
288 # - None means nothing to push
289 # - 0 means HTTP error
289 # - 0 means HTTP error
290 # - 1 means we pushed and remote head count is unchanged *or*
290 # - 1 means we pushed and remote head count is unchanged *or*
291 # we have outgoing changesets but refused to push
291 # we have outgoing changesets but refused to push
292 # - other values as described by addchangegroup()
292 # - other values as described by addchangegroup()
293 self.cgresult = None
293 self.cgresult = None
294 # Boolean value for the bookmark push
294 # Boolean value for the bookmark push
295 self.bkresult = None
295 self.bkresult = None
296 # discovery.outgoing object (contains common and outgoing data)
296 # discovery.outgoing object (contains common and outgoing data)
297 self.outgoing = None
297 self.outgoing = None
298 # all remote topological heads before the push
298 # all remote topological heads before the push
299 self.remoteheads = None
299 self.remoteheads = None
300 # Details of the remote branch pre and post push
300 # Details of the remote branch pre and post push
301 #
301 #
302 # mapping: {'branch': ([remoteheads],
302 # mapping: {'branch': ([remoteheads],
303 # [newheads],
303 # [newheads],
304 # [unsyncedheads],
304 # [unsyncedheads],
305 # [discardedheads])}
305 # [discardedheads])}
306 # - branch: the branch name
306 # - branch: the branch name
307 # - remoteheads: the list of remote heads known locally
307 # - remoteheads: the list of remote heads known locally
308 # None if the branch is new
308 # None if the branch is new
309 # - newheads: the new remote heads (known locally) with outgoing pushed
309 # - newheads: the new remote heads (known locally) with outgoing pushed
310 # - unsyncedheads: the list of remote heads unknown locally.
310 # - unsyncedheads: the list of remote heads unknown locally.
311 # - discardedheads: the list of remote heads made obsolete by the push
311 # - discardedheads: the list of remote heads made obsolete by the push
312 self.pushbranchmap = None
312 self.pushbranchmap = None
313 # testable as a boolean indicating if any nodes are missing locally.
313 # testable as a boolean indicating if any nodes are missing locally.
314 self.incoming = None
314 self.incoming = None
315 # summary of the remote phase situation
315 # summary of the remote phase situation
316 self.remotephases = None
316 self.remotephases = None
317 # phases changes that must be pushed along side the changesets
317 # phases changes that must be pushed along side the changesets
318 self.outdatedphases = None
318 self.outdatedphases = None
319 # phases changes that must be pushed if changeset push fails
319 # phases changes that must be pushed if changeset push fails
320 self.fallbackoutdatedphases = None
320 self.fallbackoutdatedphases = None
321 # outgoing obsmarkers
321 # outgoing obsmarkers
322 self.outobsmarkers = set()
322 self.outobsmarkers = set()
323 # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
323 # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
324 self.outbookmarks = []
324 self.outbookmarks = []
325 # transaction manager
325 # transaction manager
326 self.trmanager = None
326 self.trmanager = None
327 # map { pushkey partid -> callback handling failure}
327 # map { pushkey partid -> callback handling failure}
328 # used to handle exception from mandatory pushkey part failure
328 # used to handle exception from mandatory pushkey part failure
329 self.pkfailcb = {}
329 self.pkfailcb = {}
330 # an iterable of pushvars or None
330 # an iterable of pushvars or None
331 self.pushvars = pushvars
331 self.pushvars = pushvars
332 # publish pushed changesets
332 # publish pushed changesets
333 self.publish = publish
333 self.publish = publish
334
334
335 @util.propertycache
335 @util.propertycache
336 def futureheads(self):
336 def futureheads(self):
337 """future remote heads if the changeset push succeeds"""
337 """future remote heads if the changeset push succeeds"""
338 return self.outgoing.ancestorsof
338 return self.outgoing.ancestorsof
339
339
340 @util.propertycache
340 @util.propertycache
341 def fallbackheads(self):
341 def fallbackheads(self):
342 """future remote heads if the changeset push fails"""
342 """future remote heads if the changeset push fails"""
343 if self.revs is None:
343 if self.revs is None:
344 # no target to push, all common heads are relevant
344 # no target to push, all common heads are relevant
345 return self.outgoing.commonheads
345 return self.outgoing.commonheads
346 unfi = self.repo.unfiltered()
346 unfi = self.repo.unfiltered()
347 # I want cheads = heads(::push_heads and ::commonheads)
347 # I want cheads = heads(::push_heads and ::commonheads)
348 #
348 #
349 # To push, we already computed
349 # To push, we already computed
350 # common = (::commonheads)
350 # common = (::commonheads)
351 # missing = ((commonheads::push_heads) - commonheads)
351 # missing = ((commonheads::push_heads) - commonheads)
352 #
352 #
353 # So we basically search
353 # So we basically search
354 #
354 #
355 # almost_heads = heads((parents(missing) + push_heads) & common)
355 # almost_heads = heads((parents(missing) + push_heads) & common)
356 #
356 #
357 # We use "almost" here as this can return revision that are ancestors
357 # We use "almost" here as this can return revision that are ancestors
358 # of other in the set and we need to explicitly turn it into an
358 # of other in the set and we need to explicitly turn it into an
359 # antichain later. We can do so using:
359 # antichain later. We can do so using:
360 #
360 #
361 # cheads = heads(almost_heads::almost_heads)
361 # cheads = heads(almost_heads::almost_heads)
362 #
362 #
363 # In practice the code is a bit more convoluted, to avoid some extra
363 # In practice the code is a bit more convoluted, to avoid some extra
364 # computation. It nevertheless aims at doing the same computation as
364 # computation. It nevertheless aims at doing the same computation as
365 # highlighted above.
365 # highlighted above.
366 common = self.outgoing.common
366 common = self.outgoing.common
367 unfi = self.repo.unfiltered()
367 unfi = self.repo.unfiltered()
368 cl = unfi.changelog
368 cl = unfi.changelog
369 to_rev = cl.index.rev
369 to_rev = cl.index.rev
370 to_node = cl.node
370 to_node = cl.node
371 parent_revs = cl.parentrevs
371 parent_revs = cl.parentrevs
372 unselected = []
372 unselected = []
373 cheads = set()
373 cheads = set()
374 # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly
374 # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly
375 for n in self.revs:
375 for n in self.revs:
376 r = to_rev(n)
376 r = to_rev(n)
377 if r in common:
377 if r in common:
378 cheads.add(r)
378 cheads.add(r)
379 else:
379 else:
380 unselected.append(r)
380 unselected.append(r)
381 known_non_heads = cl.ancestors(cheads, inclusive=True)
381 known_non_heads = cl.ancestors(cheads, inclusive=True)
382 if unselected:
382 if unselected:
383 missing_revs = {to_rev(n) for n in self.outgoing.missing}
383 missing_revs = {to_rev(n) for n in self.outgoing.missing}
384 missing_revs.add(nullrev)
384 missing_revs.add(nullrev)
385 root_points = set()
385 root_points = set()
386 for r in missing_revs:
386 for r in missing_revs:
387 p1, p2 = parent_revs(r)
387 p1, p2 = parent_revs(r)
388 if p1 not in missing_revs and p1 not in known_non_heads:
388 if p1 not in missing_revs and p1 not in known_non_heads:
389 root_points.add(p1)
389 root_points.add(p1)
390 if p2 not in missing_revs and p2 not in known_non_heads:
390 if p2 not in missing_revs and p2 not in known_non_heads:
391 root_points.add(p2)
391 root_points.add(p2)
392 if root_points:
392 if root_points:
393 heads = unfi.revs(b'heads(%ld::%ld)', root_points, root_points)
393 heads = unfi.revs(b'heads(%ld::%ld)', root_points, root_points)
394 cheads.update(heads)
394 cheads.update(heads)
395 # XXX-perf: could this be a set of revisions?
395 # XXX-perf: could this be a set of revisions?
396 return [to_node(r) for r in sorted(cheads)]
396 return [to_node(r) for r in sorted(cheads)]
397
397
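A toy rendering of the computation described in the comments above, on integer revisions: keep the push heads that are already common, add the non-missing parents of missing revisions, then reduce the candidates to an antichain. It is simplified (the real code also prunes parents already known not to be heads), but follows the same shape:

    def fallback_heads(push_revs, common, missing, parents):
        # common/missing are sets of revs; parents maps rev -> (p1, p2),
        # with -1 standing for the null revision.
        cheads = {r for r in push_revs if r in common}
        root_points = set()
        for r in missing:
            for p in parents[r]:
                if p != -1 and p not in missing:
                    root_points.add(p)
        candidates = cheads | root_points

        def ancestors(r):
            seen, stack = set(), [r]
            while stack:
                for p in parents[stack.pop()]:
                    if p != -1 and p not in seen:
                        seen.add(p)
                        stack.append(p)
            return seen

        # keep only maximal elements: no candidate may be an ancestor
        # of another candidate
        return {
            r
            for r in candidates
            if not any(r in ancestors(o) for o in candidates if o != r)
        }

    parents = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (1, -1)}
    assert fallback_heads({3}, {0, 1, 2}, {3}, parents) == {1}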
398 @property
398 @property
399 def commonheads(self):
399 def commonheads(self):
400 """set of all common heads after changeset bundle push"""
400 """set of all common heads after changeset bundle push"""
401 if self.cgresult:
401 if self.cgresult:
402 return self.futureheads
402 return self.futureheads
403 else:
403 else:
404 return self.fallbackheads
404 return self.fallbackheads
405
405
406
406
407 # mapping of message used when pushing bookmark
407 # mapping of message used when pushing bookmark
408 bookmsgmap = {
408 bookmsgmap = {
409 b'update': (
409 b'update': (
410 _(b"updating bookmark %s\n"),
410 _(b"updating bookmark %s\n"),
411 _(b'updating bookmark %s failed\n'),
411 _(b'updating bookmark %s failed\n'),
412 ),
412 ),
413 b'export': (
413 b'export': (
414 _(b"exporting bookmark %s\n"),
414 _(b"exporting bookmark %s\n"),
415 _(b'exporting bookmark %s failed\n'),
415 _(b'exporting bookmark %s failed\n'),
416 ),
416 ),
417 b'delete': (
417 b'delete': (
418 _(b"deleting remote bookmark %s\n"),
418 _(b"deleting remote bookmark %s\n"),
419 _(b'deleting remote bookmark %s failed\n'),
419 _(b'deleting remote bookmark %s failed\n'),
420 ),
420 ),
421 }
421 }
422
422
423
423
424 def push(
424 def push(
425 repo,
425 repo,
426 remote,
426 remote,
427 force=False,
427 force=False,
428 revs=None,
428 revs=None,
429 newbranch=False,
429 newbranch=False,
430 bookmarks=(),
430 bookmarks=(),
431 publish=False,
431 publish=False,
432 opargs=None,
432 opargs=None,
433 ):
433 ):
434 """Push outgoing changesets (limited by revs) from a local
434 """Push outgoing changesets (limited by revs) from a local
435 repository to remote. Return an integer:
435 repository to remote. Return an integer:
436 - None means nothing to push
436 - None means nothing to push
437 - 0 means HTTP error
437 - 0 means HTTP error
438 - 1 means we pushed and remote head count is unchanged *or*
438 - 1 means we pushed and remote head count is unchanged *or*
439 we have outgoing changesets but refused to push
439 we have outgoing changesets but refused to push
440 - other values as described by addchangegroup()
440 - other values as described by addchangegroup()
441 """
441 """
442 if opargs is None:
442 if opargs is None:
443 opargs = {}
443 opargs = {}
444 pushop = pushoperation(
444 pushop = pushoperation(
445 repo,
445 repo,
446 remote,
446 remote,
447 force,
447 force,
448 revs,
448 revs,
449 newbranch,
449 newbranch,
450 bookmarks,
450 bookmarks,
451 publish,
451 publish,
452 **pycompat.strkwargs(opargs)
452 **pycompat.strkwargs(opargs)
453 )
453 )
454 if pushop.remote.local():
454 if pushop.remote.local():
455 missing = (
455 missing = (
456 set(pushop.repo.requirements) - pushop.remote.local().supported
456 set(pushop.repo.requirements) - pushop.remote.local().supported
457 )
457 )
458 if missing:
458 if missing:
459 msg = _(
459 msg = _(
460 b"required features are not"
460 b"required features are not"
461 b" supported in the destination:"
461 b" supported in the destination:"
462 b" %s"
462 b" %s"
463 ) % (b', '.join(sorted(missing)))
463 ) % (b', '.join(sorted(missing)))
464 raise error.Abort(msg)
464 raise error.Abort(msg)
465
465
466 if not pushop.remote.canpush():
466 if not pushop.remote.canpush():
467 raise error.Abort(_(b"destination does not support push"))
467 raise error.Abort(_(b"destination does not support push"))
468
468
469 if not pushop.remote.capable(b'unbundle'):
469 if not pushop.remote.capable(b'unbundle'):
470 raise error.Abort(
470 raise error.Abort(
471 _(
471 _(
472 b'cannot push: destination does not support the '
472 b'cannot push: destination does not support the '
473 b'unbundle wire protocol command'
473 b'unbundle wire protocol command'
474 )
474 )
475 )
475 )
476 for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
476 for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
477 # Check that a computer is registered for that category for at least
477 # Check that a computer is registered for that category for at least
478 # one revlog kind.
478 # one revlog kind.
479 for kind, computers in repo._sidedata_computers.items():
479 for kind, computers in repo._sidedata_computers.items():
480 if computers.get(category):
480 if computers.get(category):
481 break
481 break
482 else:
482 else:
483 raise error.Abort(
483 raise error.Abort(
484 _(
484 _(
485 b'cannot push: required sidedata category not supported'
485 b'cannot push: required sidedata category not supported'
486 b" by this client: '%s'"
486 b" by this client: '%s'"
487 )
487 )
488 % pycompat.bytestr(category)
488 % pycompat.bytestr(category)
489 )
489 )
490 # get lock as we might write phase data
490 # get lock as we might write phase data
491 wlock = lock = None
491 wlock = lock = None
492 try:
492 try:
493 try:
493 try:
494 # bundle2 push may receive a reply bundle touching bookmarks
494 # bundle2 push may receive a reply bundle touching bookmarks
495 # requiring the wlock. Take it now to ensure proper ordering.
495 # requiring the wlock. Take it now to ensure proper ordering.
496 maypushback = pushop.ui.configbool(
496 maypushback = pushop.ui.configbool(
497 b'experimental',
497 b'experimental',
498 b'bundle2.pushback',
498 b'bundle2.pushback',
499 )
499 )
500 if (
500 if (
501 (not _forcebundle1(pushop))
501 (not _forcebundle1(pushop))
502 and maypushback
502 and maypushback
503 and not bookmod.bookmarksinstore(repo)
503 and not bookmod.bookmarksinstore(repo)
504 ):
504 ):
505 wlock = pushop.repo.wlock()
505 wlock = pushop.repo.wlock()
506 lock = pushop.repo.lock()
506 lock = pushop.repo.lock()
507 pushop.trmanager = transactionmanager(
507 pushop.trmanager = transactionmanager(
508 pushop.repo, b'push-response', pushop.remote.url()
508 pushop.repo, b'push-response', pushop.remote.url()
509 )
509 )
510 except error.LockUnavailable as err:
510 except error.LockUnavailable as err:
511 # source repo cannot be locked.
511 # source repo cannot be locked.
512 # We do not abort the push, but just disable the local phase
512 # We do not abort the push, but just disable the local phase
513 # synchronisation.
513 # synchronisation.
514 msg = b'cannot lock source repository: %s\n'
514 msg = b'cannot lock source repository: %s\n'
515 msg %= stringutil.forcebytestr(err)
515 msg %= stringutil.forcebytestr(err)
516 pushop.ui.debug(msg)
516 pushop.ui.debug(msg)
517
517
518 pushop.repo.checkpush(pushop)
518 pushop.repo.checkpush(pushop)
519 _checkpublish(pushop)
519 _checkpublish(pushop)
520 _pushdiscovery(pushop)
520 _pushdiscovery(pushop)
521 if not pushop.force:
521 if not pushop.force:
522 _checksubrepostate(pushop)
522 _checksubrepostate(pushop)
523 if not _forcebundle1(pushop):
523 if not _forcebundle1(pushop):
524 _pushbundle2(pushop)
524 _pushbundle2(pushop)
525 _pushchangeset(pushop)
525 _pushchangeset(pushop)
526 _pushsyncphase(pushop)
526 _pushsyncphase(pushop)
527 _pushobsolete(pushop)
527 _pushobsolete(pushop)
528 _pushbookmark(pushop)
528 _pushbookmark(pushop)
529 if pushop.trmanager is not None:
529 if pushop.trmanager is not None:
530 pushop.trmanager.close()
530 pushop.trmanager.close()
531 finally:
531 finally:
532 lockmod.release(pushop.trmanager, lock, wlock)
532 lockmod.release(pushop.trmanager, lock, wlock)
533
533
534 if repo.ui.configbool(b'experimental', b'remotenames'):
534 if repo.ui.configbool(b'experimental', b'remotenames'):
535 logexchange.pullremotenames(repo, remote)
535 logexchange.pullremotenames(repo, remote)
536
536
537 return pushop
537 return pushop
538
538
539
539
540 # list of steps to perform discovery before push
540 # list of steps to perform discovery before push
541 pushdiscoveryorder = []
541 pushdiscoveryorder = []
542
542
543 # Mapping between step name and function
543 # Mapping between step name and function
544 #
544 #
545 # This exists to help extensions wrap steps if necessary
545 # This exists to help extensions wrap steps if necessary
546 pushdiscoverymapping = {}
546 pushdiscoverymapping = {}
547
547
548
548
549 def pushdiscovery(stepname):
549 def pushdiscovery(stepname):
550 """decorator for function performing discovery before push
550 """decorator for function performing discovery before push
551
551
552 The function is added to the step -> function mapping and appended to the
552 The function is added to the step -> function mapping and appended to the
553 list of steps. Beware that decorated functions will be added in order (this
553 list of steps. Beware that decorated functions will be added in order (this
554 may matter).
554 may matter).
555
555
556 You can only use this decorator for a new step; if you want to wrap a step
556 You can only use this decorator for a new step; if you want to wrap a step
557 from an extension, change the pushdiscoverymapping dictionary directly."""
557 from an extension, change the pushdiscoverymapping dictionary directly."""
558
558
559 def dec(func):
559 def dec(func):
560 assert stepname not in pushdiscoverymapping
560 assert stepname not in pushdiscoverymapping
561 pushdiscoverymapping[stepname] = func
561 pushdiscoverymapping[stepname] = func
562 pushdiscoveryorder.append(stepname)
562 pushdiscoveryorder.append(stepname)
563 return func
563 return func
564
564
565 return dec
565 return dec
566
566
567
567
568 def _pushdiscovery(pushop):
568 def _pushdiscovery(pushop):
569 """Run all discovery steps"""
569 """Run all discovery steps"""
570 for stepname in pushdiscoveryorder:
570 for stepname in pushdiscoveryorder:
571 step = pushdiscoverymapping[stepname]
571 step = pushdiscoverymapping[stepname]
572 step(pushop)
572 step(pushop)
573
573
574
574
575 def _checksubrepostate(pushop):
575 def _checksubrepostate(pushop):
576 """Ensure all outgoing referenced subrepo revisions are present locally"""
576 """Ensure all outgoing referenced subrepo revisions are present locally"""
577
577
578 repo = pushop.repo
578 repo = pushop.repo
579
579
580 # If the repository does not use subrepos, skip the expensive
580 # If the repository does not use subrepos, skip the expensive
581 # manifest checks.
581 # manifest checks.
582 if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
582 if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
583 return
583 return
584
584
585 for n in pushop.outgoing.missing:
585 for n in pushop.outgoing.missing:
586 ctx = repo[n]
586 ctx = repo[n]
587
587
588 if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
588 if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
589 for subpath in sorted(ctx.substate):
589 for subpath in sorted(ctx.substate):
590 sub = ctx.sub(subpath)
590 sub = ctx.sub(subpath)
591 sub.verify(onpush=True)
591 sub.verify(onpush=True)
592
592
593
593
594 @pushdiscovery(b'changeset')
594 @pushdiscovery(b'changeset')
595 def _pushdiscoverychangeset(pushop):
595 def _pushdiscoverychangeset(pushop):
596 """discover the changeset that need to be pushed"""
596 """discover the changeset that need to be pushed"""
597 fci = discovery.findcommonincoming
597 fci = discovery.findcommonincoming
598 if pushop.revs:
598 if pushop.revs:
599 commoninc = fci(
599 commoninc = fci(
600 pushop.repo,
600 pushop.repo,
601 pushop.remote,
601 pushop.remote,
602 force=pushop.force,
602 force=pushop.force,
603 ancestorsof=pushop.revs,
603 ancestorsof=pushop.revs,
604 )
604 )
605 else:
605 else:
606 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
606 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
607 common, inc, remoteheads = commoninc
607 common, inc, remoteheads = commoninc
608 fco = discovery.findcommonoutgoing
608 fco = discovery.findcommonoutgoing
609 outgoing = fco(
609 outgoing = fco(
610 pushop.repo,
610 pushop.repo,
611 pushop.remote,
611 pushop.remote,
612 onlyheads=pushop.revs,
612 onlyheads=pushop.revs,
613 commoninc=commoninc,
613 commoninc=commoninc,
614 force=pushop.force,
614 force=pushop.force,
615 )
615 )
616 pushop.outgoing = outgoing
616 pushop.outgoing = outgoing
617 pushop.remoteheads = remoteheads
617 pushop.remoteheads = remoteheads
618 pushop.incoming = inc
618 pushop.incoming = inc
619
619
620
620
621 @pushdiscovery(b'phase')
621 @pushdiscovery(b'phase')
622 def _pushdiscoveryphase(pushop):
622 def _pushdiscoveryphase(pushop):
623 """discover the phase that needs to be pushed
623 """discover the phase that needs to be pushed
624
624
625 (computed for both success and failure case for changesets push)"""
625 (computed for both success and failure case for changesets push)"""
626 outgoing = pushop.outgoing
626 outgoing = pushop.outgoing
627 repo = pushop.repo
627 repo = pushop.repo
628 unfi = repo.unfiltered()
628 unfi = repo.unfiltered()
629 cl = unfi.changelog
629 cl = unfi.changelog
630 to_rev = cl.index.rev
630 to_rev = cl.index.rev
631 remotephases = listkeys(pushop.remote, b'phases')
631 remotephases = listkeys(pushop.remote, b'phases')
632
632
633 if (
633 if (
634 pushop.ui.configbool(b'ui', b'_usedassubrepo')
634 pushop.ui.configbool(b'ui', b'_usedassubrepo')
635 and remotephases # server supports phases
635 and remotephases # server supports phases
636 and not pushop.outgoing.missing # no changesets to be pushed
636 and not pushop.outgoing.missing # no changesets to be pushed
637 and remotephases.get(b'publishing', False)
637 and remotephases.get(b'publishing', False)
638 ):
638 ):
639 # When:
639 # When:
640 # - this is a subrepo push
640 # - this is a subrepo push
641 # - and remote supports phases
641 # - and remote supports phases
642 # - and no changesets are to be pushed
642 # - and no changesets are to be pushed
643 # - and remote is publishing
643 # - and remote is publishing
644 # We may be in issue 3781 case!
644 # We may be in issue 3781 case!
645 # We drop the phase synchronisation normally done as a
645 # We drop the phase synchronisation normally done as a
646 # courtesy, as it would publish changesets that may still
646 # courtesy, as it would publish changesets that may still
647 # be draft locally.
647 # be draft locally.
648 pushop.outdatedphases = []
648 pushop.outdatedphases = []
649 pushop.fallbackoutdatedphases = []
649 pushop.fallbackoutdatedphases = []
650 return
650 return
651
651
652 fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads}
652 fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads}
653 pushop.remotephases = phases.RemotePhasesSummary(
653 pushop.remotephases = phases.RemotePhasesSummary(
654 pushop.repo,
654 pushop.repo,
655 fallbackheads_rev,
655 fallbackheads_rev,
656 remotephases,
656 remotephases,
657 )
657 )
658 droots = set(pushop.remotephases.draft_roots)
658 droots = set(pushop.remotephases.draft_roots)
659
659
660 fallback_publishing = pushop.remotephases.publishing
660 fallback_publishing = pushop.remotephases.publishing
661 push_publishing = pushop.remotephases.publishing or pushop.publish
661 push_publishing = pushop.remotephases.publishing or pushop.publish
662 missing_revs = {to_rev(n) for n in outgoing.missing}
662 missing_revs = {to_rev(n) for n in outgoing.missing}
663 drafts = unfi._phasecache.get_raw_set(unfi, phases.draft)
663 drafts = unfi._phasecache.get_raw_set(unfi, phases.draft)
664
664
665 if fallback_publishing:
665 if fallback_publishing:
666 fallback_roots = droots - missing_revs
666 fallback_roots = droots - missing_revs
667 revset = b'heads(%ld::%ld)'
667 revset = b'heads(%ld::%ld)'
668 else:
668 else:
669 fallback_roots = droots - drafts
669 fallback_roots = droots - drafts
670 fallback_roots -= missing_revs
670 fallback_roots -= missing_revs
671 # Get the list of all revs draft on remote but public here.
671 # Get the list of all revs draft on remote but public here.
672 revset = b'heads((%ld::%ld) and public())'
672 revset = b'heads((%ld::%ld) and public())'
673 if not fallback_roots:
673 if not fallback_roots:
674 fallback = fallback_rev = []
674 fallback = fallback_rev = []
675 else:
675 else:
676 fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev)
676 fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev)
677 fallback = [repo[r] for r in fallback_rev]
677 fallback = [repo[r] for r in fallback_rev]
678
678
679 if push_publishing:
679 if push_publishing:
680 published = missing_revs.copy()
680 published = missing_revs.copy()
681 else:
681 else:
682 published = missing_revs - drafts
682 published = missing_revs - drafts
683 if pushop.publish:
683 if pushop.publish:
684 published.update(fallbackheads_rev & drafts)
684 published.update(fallbackheads_rev & drafts)
685 elif fallback:
685 elif fallback:
686 published.update(fallback_rev)
686 published.update(fallback_rev)
687
687
688 pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)]
688 pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)]
689 pushop.fallbackoutdatedphases = fallback
689 pushop.fallbackoutdatedphases = fallback
690
690
691
691
692 @pushdiscovery(b'obsmarker')
692 @pushdiscovery(b'obsmarker')
693 def _pushdiscoveryobsmarkers(pushop):
693 def _pushdiscoveryobsmarkers(pushop):
694 if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
694 if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
695 return
695 return
696
696
697 if not pushop.repo.obsstore:
697 if not pushop.repo.obsstore:
698 return
698 return
699
699
700 if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
700 if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
701 return
701 return
702
702
703 repo = pushop.repo
703 repo = pushop.repo
704 # a very naive computation that can be quite expensive on big repos.
704 # a very naive computation that can be quite expensive on big repos.
705 # However, evolution is currently slow on them anyway.
705 # However, evolution is currently slow on them anyway.
706 nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
706 revs = repo.revs(b'::%ln', pushop.futureheads)
707 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
707 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(revs=revs)
708
708
709
709
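This hunk is the change named in the commit message: instead of materializing a changectx for every ancestor just to extract its node, the discovery step now hands the integer revision set straight to relevantmarkers(revs=...). A toy model of the new shape, with stand-in structures (an ancestors map and a markers-by-revision index, neither of which is the real obsstore API):

    def relevant_markers_for_push(head_revs, ancestors, markers_by_rev):
        # Walk ancestor *revisions* of the pushed heads and gather the
        # obsolescence markers attached to them, never building
        # per-changeset objects along the way.
        revs, stack = set(), list(head_revs)
        while stack:
            r = stack.pop()
            if r not in revs:
                revs.add(r)
                stack.extend(ancestors.get(r, ()))
        markers = set()
        for r in revs:
            markers.update(markers_by_rev.get(r, ()))
        return markers

    ancestors = {2: [1], 1: [0], 0: []}
    assert relevant_markers_for_push([2], ancestors, {1: ['m1']}) == {'m1'}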
710 @pushdiscovery(b'bookmarks')
710 @pushdiscovery(b'bookmarks')
711 def _pushdiscoverybookmarks(pushop):
711 def _pushdiscoverybookmarks(pushop):
712 ui = pushop.ui
712 ui = pushop.ui
713 repo = pushop.repo.unfiltered()
713 repo = pushop.repo.unfiltered()
714 remote = pushop.remote
714 remote = pushop.remote
715 ui.debug(b"checking for updated bookmarks\n")
715 ui.debug(b"checking for updated bookmarks\n")
716 ancestors = ()
716 ancestors = ()
717 if pushop.revs:
717 if pushop.revs:
718 revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
718 revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
719 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
719 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
720
720
721 remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))
721 remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))
722
722
723 explicit = {
723 explicit = {
724 repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
724 repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
725 }
725 }
726
726
727 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
727 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
728 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
728 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
729
729
730
730
731 def _processcompared(pushop, pushed, explicit, remotebms, comp):
731 def _processcompared(pushop, pushed, explicit, remotebms, comp):
732 """take decision on bookmarks to push to the remote repo
732 """take decision on bookmarks to push to the remote repo
733
733
734 Exists to help extensions alter this behavior.
734 Exists to help extensions alter this behavior.
735 """
735 """
736 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
736 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
737
737
738 repo = pushop.repo
738 repo = pushop.repo
739
739
740 for b, scid, dcid in advsrc:
740 for b, scid, dcid in advsrc:
741 if b in explicit:
741 if b in explicit:
742 explicit.remove(b)
742 explicit.remove(b)
743 if not pushed or repo[scid].rev() in pushed:
743 if not pushed or repo[scid].rev() in pushed:
744 pushop.outbookmarks.append((b, dcid, scid))
744 pushop.outbookmarks.append((b, dcid, scid))
745 # search added bookmark
745 # search added bookmark
746 for b, scid, dcid in addsrc:
746 for b, scid, dcid in addsrc:
747 if b in explicit:
747 if b in explicit:
748 explicit.remove(b)
748 explicit.remove(b)
749 if bookmod.isdivergent(b):
749 if bookmod.isdivergent(b):
750 pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
750 pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
751 pushop.bkresult = 2
751 pushop.bkresult = 2
752 elif pushed and repo[scid].rev() not in pushed:
752 elif pushed and repo[scid].rev() not in pushed:
753 # in case of race or secret
753 # in case of race or secret
754 msg = _(b'cannot push bookmark X without its revision: %s!\n')
754 msg = _(b'cannot push bookmark X without its revision: %s!\n')
755 pushop.ui.warn(msg % b)
755 pushop.ui.warn(msg % b)
756 pushop.bkresult = 2
756 pushop.bkresult = 2
757 else:
757 else:
758 pushop.outbookmarks.append((b, b'', scid))
758 pushop.outbookmarks.append((b, b'', scid))
759 # search for overwritten bookmark
759 # search for overwritten bookmark
760 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
760 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
761 if b in explicit:
761 if b in explicit:
762 explicit.remove(b)
762 explicit.remove(b)
763 if not pushed or repo[scid].rev() in pushed:
763 if not pushed or repo[scid].rev() in pushed:
764 pushop.outbookmarks.append((b, dcid, scid))
764 pushop.outbookmarks.append((b, dcid, scid))
765 # search for bookmark to delete
765 # search for bookmark to delete
766 for b, scid, dcid in adddst:
766 for b, scid, dcid in adddst:
767 if b in explicit:
767 if b in explicit:
768 explicit.remove(b)
768 explicit.remove(b)
769 # treat as "deleted locally"
769 # treat as "deleted locally"
770 pushop.outbookmarks.append((b, dcid, b''))
770 pushop.outbookmarks.append((b, dcid, b''))
771 # identical bookmarks shouldn't get reported
771 # identical bookmarks shouldn't get reported
772 for b, scid, dcid in same:
772 for b, scid, dcid in same:
773 if b in explicit:
773 if b in explicit:
774 explicit.remove(b)
774 explicit.remove(b)
775
775
776 if explicit:
776 if explicit:
777 explicit = sorted(explicit)
777 explicit = sorted(explicit)
778 # we should probably list all of them
778 # we should probably list all of them
779 pushop.ui.warn(
779 pushop.ui.warn(
780 _(
780 _(
781 b'bookmark %s does not exist on the local '
781 b'bookmark %s does not exist on the local '
782 b'or remote repository!\n'
782 b'or remote repository!\n'
783 )
783 )
784 % explicit[0]
784 % explicit[0]
785 )
785 )
786 pushop.bkresult = 2
786 pushop.bkresult = 2
787
787
788 pushop.outbookmarks.sort()
788 pushop.outbookmarks.sort()
789
789
790
790
791 def _pushcheckoutgoing(pushop):
791 def _pushcheckoutgoing(pushop):
792 outgoing = pushop.outgoing
792 outgoing = pushop.outgoing
793 unfi = pushop.repo.unfiltered()
793 unfi = pushop.repo.unfiltered()
794 if not outgoing.missing:
794 if not outgoing.missing:
795 # nothing to push
795 # nothing to push
796 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
796 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
797 return False
797 return False
798 # something to push
798 # something to push
799 if not pushop.force:
799 if not pushop.force:
800 # if repo.obsstore == False --> no obsolete
800 # if repo.obsstore == False --> no obsolete
801 # then, save the iteration
801 # then, save the iteration
802 if unfi.obsstore:
802 if unfi.obsstore:
803 # these messages are defined here to respect the 80 char limit
803 # these messages are defined here to respect the 80 char limit
804 mso = _(b"push includes obsolete changeset: %s!")
804 mso = _(b"push includes obsolete changeset: %s!")
805 mspd = _(b"push includes phase-divergent changeset: %s!")
805 mspd = _(b"push includes phase-divergent changeset: %s!")
806 mscd = _(b"push includes content-divergent changeset: %s!")
806 mscd = _(b"push includes content-divergent changeset: %s!")
807 mst = {
807 mst = {
808 b"orphan": _(b"push includes orphan changeset: %s!"),
808 b"orphan": _(b"push includes orphan changeset: %s!"),
809 b"phase-divergent": mspd,
809 b"phase-divergent": mspd,
810 b"content-divergent": mscd,
810 b"content-divergent": mscd,
811 }
811 }
812 # If we are about to push, and there is at least one
812 # If we are about to push, and there is at least one
813 # obsolete or unstable changeset in missing, then at
813 # obsolete or unstable changeset in missing, then at
814 # least one of the missing heads will be obsolete or
814 # least one of the missing heads will be obsolete or
815 # unstable. So checking only the heads is ok.
815 # unstable. So checking only the heads is ok.
816 for node in outgoing.ancestorsof:
816 for node in outgoing.ancestorsof:
817 ctx = unfi[node]
817 ctx = unfi[node]
818 if ctx.obsolete():
818 if ctx.obsolete():
819 raise error.Abort(mso % ctx)
819 raise error.Abort(mso % ctx)
820 elif ctx.isunstable():
820 elif ctx.isunstable():
821 # TODO print more than one instability in the abort
821 # TODO print more than one instability in the abort
822 # message
822 # message
823 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
823 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
824
824
825 discovery.checkheads(pushop)
825 discovery.checkheads(pushop)
826 return True
826 return True
827
827
828
828
829 # List of names of steps to perform for an outgoing bundle2, order matters.
829 # List of names of steps to perform for an outgoing bundle2, order matters.
830 b2partsgenorder = []
830 b2partsgenorder = []
831
831
832 # Mapping between step name and function
832 # Mapping between step name and function
833 #
833 #
834 # This exists to help extensions wrap steps if necessary
834 # This exists to help extensions wrap steps if necessary
835 b2partsgenmapping = {}
835 b2partsgenmapping = {}
836
836
837
837
838 def b2partsgenerator(stepname, idx=None):
838 def b2partsgenerator(stepname, idx=None):
839 """decorator for function generating bundle2 part
839 """decorator for function generating bundle2 part
840
840
841 The function is added to the step -> function mapping and appended to the
841 The function is added to the step -> function mapping and appended to the
842 list of steps. Beware that decorated functions will be added in order
842 list of steps. Beware that decorated functions will be added in order
843 (this may matter).
843 (this may matter).
844
844
845 You can only use this decorator for new steps, if you want to wrap a step
845 You can only use this decorator for new steps, if you want to wrap a step
846 from an extension, attack the b2partsgenmapping dictionary directly."""
846 from an extension, attack the b2partsgenmapping dictionary directly."""
847
847
848 def dec(func):
848 def dec(func):
849 assert stepname not in b2partsgenmapping
849 assert stepname not in b2partsgenmapping
850 b2partsgenmapping[stepname] = func
850 b2partsgenmapping[stepname] = func
851 if idx is None:
851 if idx is None:
852 b2partsgenorder.append(stepname)
852 b2partsgenorder.append(stepname)
853 else:
853 else:
854 b2partsgenorder.insert(idx, stepname)
854 b2partsgenorder.insert(idx, stepname)
855 return func
855 return func
856
856
857 return dec
857 return dec
858
858
859
859
860 def _pushb2ctxcheckheads(pushop, bundler):
860 def _pushb2ctxcheckheads(pushop, bundler):
861 """Generate race condition checking parts
861 """Generate race condition checking parts
862
862
863 Exists as an independent function to aid extensions
863 Exists as an independent function to aid extensions
864 """
864 """
865 # * 'force' does not check for push races,
865 # * 'force' does not check for push races,
866 # * if we don't push anything, there is nothing to check.
866 # * if we don't push anything, there is nothing to check.
867 if not pushop.force and pushop.outgoing.ancestorsof:
867 if not pushop.force and pushop.outgoing.ancestorsof:
868 allowunrelated = b'related' in bundler.capabilities.get(
868 allowunrelated = b'related' in bundler.capabilities.get(
869 b'checkheads', ()
869 b'checkheads', ()
870 )
870 )
871 emptyremote = pushop.pushbranchmap is None
871 emptyremote = pushop.pushbranchmap is None
872 if not allowunrelated or emptyremote:
872 if not allowunrelated or emptyremote:
873 bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
873 bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
874 else:
874 else:
875 affected = set()
875 affected = set()
876 for branch, heads in pushop.pushbranchmap.items():
876 for branch, heads in pushop.pushbranchmap.items():
877 remoteheads, newheads, unsyncedheads, discardedheads = heads
877 remoteheads, newheads, unsyncedheads, discardedheads = heads
878 if remoteheads is not None:
878 if remoteheads is not None:
879 remote = set(remoteheads)
879 remote = set(remoteheads)
880 affected |= set(discardedheads) & remote
880 affected |= set(discardedheads) & remote
881 affected |= remote - set(newheads)
881 affected |= remote - set(newheads)
882 if affected:
882 if affected:
883 data = iter(sorted(affected))
883 data = iter(sorted(affected))
884 bundler.newpart(b'check:updated-heads', data=data)
884 bundler.newpart(b'check:updated-heads', data=data)
885
885
886
886
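# For reference, each value in pushop.pushbranchmap unpacked above is a
# 4-tuple; a hypothetical entry (node ids elided) could look like:
#
#     pushop.pushbranchmap[b'default'] = (
#         remoteheads,     # heads currently known to the remote, or None
#         newheads,        # heads expected on the remote after the push
#         unsyncedheads,   # remote heads we have not seen locally
#         discardedheads,  # remote heads that the push will discard
#     )
#
# Only remote heads that disappear (discarded, or absent from newheads) end
# up in the 'check:updated-heads' part.
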
def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(
        pushop.outgoing.missing
        or pushop.outdatedphases
        or pushop.outobsmarkers
        or pushop.outbookmarks
    )


@b2partsgenerator(b'check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = b'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        data.append((book, old))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'check:bookmarks', data=checkdata)


@b2partsgenerator(b'check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = {p: [] for p in phases.allphases}
        to_node = pushop.repo.unfiltered().changelog.node
        checks[phases.public].extend(
            to_node(r) for r in pushop.remotephases.public_heads
        )
        checks[phases.draft].extend(
            to_node(r) for r in pushop.remotephases.draft_roots
        )
        if any(checks.values()):
            for phase in checks:
                checks[phase].sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart(b'check:phases', data=checkdata)


@b2partsgenerator(b'changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(pushop.repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pushop.remote)
    cgstream = changegroup.makestream(
        pushop.repo,
        pushop.outgoing,
        version,
        b'push',
        bundlecaps=b2caps,
        remote_sidedata=remote_sidedata,
    )
    cgpart = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam(b'version', version)
    if scmutil.istreemanifest(pushop.repo):
        cgpart.addparam(b'treemanifest', b'1')
    if repository.REPO_FEATURE_SIDE_DATA in pushop.repo.features:
        cgpart.addparam(b'exp-sidedata', b'1')

    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies[b'changegroup']) == 1
        pushop.cgresult = cgreplies[b'changegroup'][0][b'return']

    return handlereply


@b2partsgenerator(b'phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if b'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
    haspushkey = b'pushkey' in b2caps
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)


def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add(b'phases')
    if pushop.outdatedphases:
        updates = {p: [] for p in phases.allphases}
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart(b'phase-heads', data=phasedata)

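# For illustration (hypothetical nodes n1, n2): with two outdated heads, the
# mapping built above (phase number -> nodes to advance) would look like:
#
#     updates = {
#         phases.public: [n1, n2],  # updates[0], heads to turn public
#         phases.draft: [],
#         phases.secret: [],
#         ...,                      # any other phases, all empty here
#     }
#
# which phases.binaryencode() serializes into the 'phase-heads' part payload.
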
def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add(b'phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_(b'updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'phases'))
        part.addparam(b'key', enc(newremotehead.hex()))
        part.addparam(b'old', enc(b'%d' % phases.draft))
        part.addparam(b'new', enc(b'%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _(b'server ignored update of %s to public!\n') % node
            elif not int(results[0][b'return']):
                msg = _(b'updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)

    return handlereply


@b2partsgenerator(b'obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if b'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

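# Note on the version negotiation above: obsmarkersversion() extracts the
# marker format versions advertised by the remote, and commonversion()
# returns the newest version both sides support, or None when there is no
# overlap. For example, assuming the local side supports formats 0 and 1:
#
#     obsolete.commonversion((0, 1))  # -> 1
#     obsolete.commonversion(())      # -> None, so the step is skipped
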
@b2partsgenerator(b'bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if b'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
    legacybooks = b'bookmarks' in legacy

    if not legacybooks and b'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif b'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)


def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return b'export'
    elif not new:
        return b'delete'
    return b'update'

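# A few illustrative calls (hypothetical node values):
#
#     _bmaction(None, node)    # -> b'export', bookmark is new on the remote
#     _bmaction(node, None)    # -> b'delete', bookmark removed remotely
#     _bmaction(node1, node2)  # -> b'update', bookmark moved
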
def _abortonsecretctx(pushop, node, b):
    """abort if a given bookmark points to a secret changeset"""
    if node and pushop.repo[node].phase() == phases.secret:
        raise error.Abort(
            _(b'cannot push bookmark %s as it points to a secret changeset') % b
        )


def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply


def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'bookmarks'))
        part.addparam(b'key', enc(book))
        part.addparam(b'old', enc(hex(old)))
        part.addparam(b'new', enc(hex(new)))
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0][b'return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1

    return handlereply


@b2partsgenerator(b'pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if b'=' not in raw:
                msg = (
                    b"unable to parse variable '%s', should follow "
                    b"'KEY=VALUE' or 'KEY=' format"
                )
                raise error.Abort(msg % raw)
            k, v = raw.split(b'=', 1)
            shellvars[k] = v

        part = bundler.newpart(b'pushvars')

        for key, value in shellvars.items():
            part.addparam(key, value, mandatory=False)

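# For example (sketch, illustrative values only): `hg push --pushvars
# "DEBUG=1"` reaches this function as pushop.pushvars == [b'DEBUG=1'] and
# yields a 'pushvars' part carrying the advisory parameter DEBUG=1; hooks on
# the receiving side can then read it as the environment variable
# HG_USERVAR_DEBUG.
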
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = pushop.trmanager and pushop.ui.configbool(
        b'experimental', b'bundle2.pushback'
    )

    # create reply capability
    capsblob = bundle2.encodecaps(
        bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
    )
    bundler.newpart(b'replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            with pushop.remote.commandexecutor() as e:
                reply = e.callcommand(
                    b'unbundle',
                    {
                        b'bundle': stream,
                        b'heads': [b'force'],
                        b'url': pushop.remote.url(),
                    },
                ).result()
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(
                pushop.repo,
                reply,
                trgetter,
                remote=pushop.remote,
            )
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.error(_(b'remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.error(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
            raise error.RemoteError(_(b'push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable(b'unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (
        outgoing.excluded or pushop.repo.changelog.filteredrevs
    ):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(
            pushop.repo,
            outgoing,
            b'01',
            b'push',
            fastpath=True,
            bundlecaps=bundlecaps,
        )
    else:
        cg = changegroup.makechangegroup(
            pushop.repo, outgoing, b'01', b'push', bundlecaps=bundlecaps
        )

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = [b'force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())

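# For orientation, a simplified sketch of how the caller elsewhere in this
# module dispatches between the two code paths (the bundle2 variant records
# 'changesets' in stepsdone, which turns this legacy function into a no-op):
#
#     if pushop.canusebundle2:
#         _pushbundle2(pushop)
#     _pushchangeset(pushop)  # no-op when bundle2 already handled it
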
def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = listkeys(pushop.remote, b'phases')
    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and pushop.cgresult is None  # nothing was pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # we may be in the issue 3871 case!
        # We drop the phase synchronisation that would otherwise be done as
        # a courtesy to publish changesets that are possibly still draft
        # locally.
        remotephases = {b'publishing': b'True'}
    if not remotephases:  # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        unfi = pushop.repo.unfiltered()
        to_rev = unfi.changelog.index.rev
        to_node = unfi.changelog.node
        cheads_revs = [to_rev(n) for n in cheads]
        pheads_revs, _dr = phases.analyze_remote_phases(
            pushop.repo,
            cheads_revs,
            remotephases,
        )
        pheads = [to_node(r) for r in pheads_revs]
        ### Apply remote phase on local
        if remotephases.get(b'publishing', False):
            _localphasemove(pushop, cheads)
        else:  # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if b'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add(b'phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            with pushop.remote.commandexecutor() as e:
                r = e.callcommand(
                    b'pushkey',
                    {
                        b'namespace': b'phases',
                        b'key': newremotehead.hex(),
                        b'old': b'%d' % phases.draft,
                        b'new': b'%d' % phases.public,
                    },
                ).result()

            if not r:
                pushop.ui.warn(
                    _(b'updating %s to public failed!\n') % newremotehead
                )


def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(
            pushop.repo, pushop.trmanager.transaction(), phase, nodes
        )
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(
                _(
                    b'cannot lock source repo, skipping '
                    b'local %s phase update\n'
                )
                % phasestr
            )

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if b'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug(b'try to push obsolete markers to remote\n')
        rslts = []
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        remotedata = obsolete._pushkeyescape(markers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey(b'obsolete', key, b'', data))
        if [r for r in rslts if not r]:
            msg = _(b'failed to push some obsolete markers!\n')
            repo.ui.warn(msg)


def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'

        with remote.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': b'bookmarks',
                    b'key': b,
                    b'old': hex(old),
                    b'new': hex(new),
                },
            ).result()

        if r:
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation:
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        heads=None,
        force=False,
        bookmarks=(),
        remotebookmarks=None,
        streamclonerequested=None,
        includepats=None,
        excludepats=None,
        depth=None,
        path=None,
    ):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # path object used to build this remote
        #
        # Ideally, the remote peer would carry that directly.
        self.remote_path = path
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [
            repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
        ]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False
        # Set of file patterns to include.
        self.includepats = includepats
        # Set of file patterns to exclude.
        self.excludepats = excludepats
        # Number of ancestor changesets to pull from each pulled head.
        self.depth = depth

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""

    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = b'%s\n%s' % (self.source, urlutil.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs[b'source'] = self.source
            self._tr.hookargs[b'url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

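# A minimal usage sketch (assuming a configured repo and url; illustration
# only). Since the class is util.transactional, it can drive a with-block:
# close() runs on normal exit and release() always runs afterwards:
#
#     with transactionmanager(repo, b'pull', url) as tm:
#         tm.transaction()  # lazily opens the transaction
#         ...               # apply incoming data
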
def listkeys(remote, namespace):
    with remote.commandexecutor() as e:
        return e.callcommand(b'listkeys', {b'namespace': namespace}).result()

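# Example call (hedged; 'remote' is any peer instance): fetching the phases
# pushkey namespace returns a plain bytes-to-bytes mapping, e.g.:
#
#     remotephases = listkeys(remote, b'phases')
#     # -> {b'publishing': b'True'} on a publishing server
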
def _fullpullbundle2(repo, pullop):
    # The server may send a partial reply, i.e. when inlining
    # pre-computed bundles. In that case, update the common
    # set based on the results and pull another bundle.
    #
    # There are two indicators that the process is finished:
    # - no changeset has been added, or
    # - all remote heads are known locally.
    # The head check must use the unfiltered view as obsolescence
    # markers can hide heads.
    unfi = repo.unfiltered()
    unficl = unfi.changelog

    def headsofdiff(h1, h2):
        """Returns heads(h1 % h2)"""
        res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
        return {ctx.node() for ctx in res}

    def headsofunion(h1, h2):
        """Returns heads((h1 + h2) - null)"""
        res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
        return {ctx.node() for ctx in res}

    while True:
        old_heads = unficl.heads()
        clstart = len(unficl)
        _pullbundle2(pullop)
        if requirements.NARROW_REQUIREMENT in repo.requirements:
            # XXX narrow clones filter the heads on the server side during
            # XXX getbundle and result in partial replies as well.
            # XXX Disable pull bundles in this case as band aid to avoid
            # XXX extra round trips.
            break
        if clstart == len(unficl):
            break
        if all(unficl.hasnode(n) for n in pullop.rheads):
            break
        new_heads = headsofdiff(unficl.heads(), old_heads)
        pullop.common = headsofunion(new_heads, pullop.common)
        pullop.rheads = set(pullop.rheads) - pullop.common


def add_confirm_callback(repo, pullop):
    """adds a finalize callback to transaction which can be used to show stats
    to user and confirm the pull before committing transaction"""

    tr = pullop.trmanager.transaction()
    scmutil.registersummarycallback(
        repo, tr, txnname=b'pull', as_validator=True
    )
    reporef = weakref.ref(repo.unfiltered())

    def prompt(tr):
        repo = reporef()
        cm = _(b'accept incoming changes (yn)?$$ &Yes $$ &No')
        if repo.ui.promptchoice(cm):
            raise error.Abort(b"user aborted")

    tr.addvalidator(b'900-pull-prompt', prompt)

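# The same prompt can be requested via configuration instead of the
# ``confirm`` argument; pull() below checks, roughly:
#
#     [pull]
#     confirm = true
#
# (ignored when HGPLAIN is in effect, unless confirm is passed explicitly).
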
1653 def pull(
1653 def pull(
1654 repo,
1654 repo,
1655 remote,
1655 remote,
1656 path=None,
1656 path=None,
1657 heads=None,
1657 heads=None,
1658 force=False,
1658 force=False,
1659 bookmarks=(),
1659 bookmarks=(),
1660 opargs=None,
1660 opargs=None,
1661 streamclonerequested=None,
1661 streamclonerequested=None,
1662 includepats=None,
1662 includepats=None,
1663 excludepats=None,
1663 excludepats=None,
1664 depth=None,
1664 depth=None,
1665 confirm=None,
1665 confirm=None,
1666 ):
1666 ):
1667 """Fetch repository data from a remote.
1667 """Fetch repository data from a remote.
1668
1668
1669 This is the main function used to retrieve data from a remote repository.
1669 This is the main function used to retrieve data from a remote repository.
1670
1670
1671 ``repo`` is the local repository to clone into.
1671 ``repo`` is the local repository to clone into.
1672 ``remote`` is a peer instance.
1672 ``remote`` is a peer instance.
1673 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1673 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1674 default) means to pull everything from the remote.
1674 default) means to pull everything from the remote.
1675 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1675 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1676 default, all remote bookmarks are pulled.
1676 default, all remote bookmarks are pulled.
1677 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1677 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1678 initialization.
1678 initialization.
1679 ``streamclonerequested`` is a boolean indicating whether a "streaming
1679 ``streamclonerequested`` is a boolean indicating whether a "streaming
1680 clone" is requested. A "streaming clone" is essentially a raw file copy
1680 clone" is requested. A "streaming clone" is essentially a raw file copy
1681 of revlogs from the server. This only works when the local repository is
1681 of revlogs from the server. This only works when the local repository is
1682 empty. The default value of ``None`` means to respect the server
1682 empty. The default value of ``None`` means to respect the server
1683 configuration for preferring stream clones.
1683 configuration for preferring stream clones.
1684 ``includepats`` and ``excludepats`` define explicit file patterns to
1684 ``includepats`` and ``excludepats`` define explicit file patterns to
1685 include and exclude in storage, respectively. If not defined, narrow
1685 include and exclude in storage, respectively. If not defined, narrow
1686 patterns from the repo instance are used, if available.
1686 patterns from the repo instance are used, if available.
1687 ``depth`` is an integer indicating the DAG depth of history we're
1687 ``depth`` is an integer indicating the DAG depth of history we're
1688 interested in. If defined, for each revision specified in ``heads``, we
1688 interested in. If defined, for each revision specified in ``heads``, we
1689 will fetch up to this many of its ancestors and data associated with them.
1689 will fetch up to this many of its ancestors and data associated with them.
1690 ``confirm`` is a boolean indicating whether the pull should be confirmed
1690 ``confirm`` is a boolean indicating whether the pull should be confirmed
1691 before committing the transaction. This overrides HGPLAIN.
1691 before committing the transaction. This overrides HGPLAIN.
1692
1692
1693 Returns the ``pulloperation`` created for this pull.
1693 Returns the ``pulloperation`` created for this pull.
1694 """
1694 """
1695 if opargs is None:
1695 if opargs is None:
1696 opargs = {}
1696 opargs = {}
1697
1697
1698 # We allow the narrow patterns to be passed in explicitly to provide more
1698 # We allow the narrow patterns to be passed in explicitly to provide more
1699 # flexibility for API consumers.
1699 # flexibility for API consumers.
1700 if includepats is not None or excludepats is not None:
1700 if includepats is not None or excludepats is not None:
1701 includepats = includepats or set()
1701 includepats = includepats or set()
1702 excludepats = excludepats or set()
1702 excludepats = excludepats or set()
1703 else:
1703 else:
1704 includepats, excludepats = repo.narrowpats
1704 includepats, excludepats = repo.narrowpats
1705
1705
1706 narrowspec.validatepatterns(includepats)
1706 narrowspec.validatepatterns(includepats)
1707 narrowspec.validatepatterns(excludepats)
1707 narrowspec.validatepatterns(excludepats)
1708
1708
1709 pullop = pulloperation(
1709 pullop = pulloperation(
1710 repo,
1710 repo,
1711 remote,
1711 remote,
1712 path=path,
1712 path=path,
1713 heads=heads,
1713 heads=heads,
1714 force=force,
1714 force=force,
1715 bookmarks=bookmarks,
1715 bookmarks=bookmarks,
1716 streamclonerequested=streamclonerequested,
1716 streamclonerequested=streamclonerequested,
1717 includepats=includepats,
1717 includepats=includepats,
1718 excludepats=excludepats,
1718 excludepats=excludepats,
1719 depth=depth,
1719 depth=depth,
1720 **pycompat.strkwargs(opargs)
1720 **pycompat.strkwargs(opargs)
1721 )
1721 )
1722
1722
1723 peerlocal = pullop.remote.local()
1723 peerlocal = pullop.remote.local()
1724 if peerlocal:
1724 if peerlocal:
1725 missing = set(peerlocal.requirements) - pullop.repo.supported
1725 missing = set(peerlocal.requirements) - pullop.repo.supported
1726 if missing:
1726 if missing:
1727 msg = _(
1727 msg = _(
1728 b"required features are not"
1728 b"required features are not"
1729 b" supported in the destination:"
1729 b" supported in the destination:"
1730 b" %s"
1730 b" %s"
1731 ) % (b', '.join(sorted(missing)))
1731 ) % (b', '.join(sorted(missing)))
1732 raise error.Abort(msg)
1732 raise error.Abort(msg)
1733
1733
1734 for category in repo._wanted_sidedata:
1734 for category in repo._wanted_sidedata:
1735 # Check that a computer is registered for that category for at least
1735 # Check that a computer is registered for that category for at least
1736 # one revlog kind.
1736 # one revlog kind.
1737 for kind, computers in repo._sidedata_computers.items():
1737 for kind, computers in repo._sidedata_computers.items():
1738 if computers.get(category):
1738 if computers.get(category):
1739 break
1739 break
1740 else:
1740 else:
1741 # This should never happen since repos are supposed to be able to
1741 # This should never happen since repos are supposed to be able to
1742 # generate the sidedata they require.
1742 # generate the sidedata they require.
1743 raise error.ProgrammingError(
1743 raise error.ProgrammingError(
1744 _(
1744 _(
1745 b'sidedata category requested by local side without local'
1745 b'sidedata category requested by local side without local'
1746 b"support: '%s'"
1746 b"support: '%s'"
1747 )
1747 )
1748 % pycompat.bytestr(category)
1748 % pycompat.bytestr(category)
1749 )
1749 )
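
The loop above uses Python's for/else construct: the else suite runs only when the loop finishes without hitting break. A minimal standalone sketch of the same pattern, with a hypothetical computers_by_kind mapping standing in for repo._sidedata_computers:

    def check_sidedata_category(computers_by_kind, category):
        # for/else: the else suite runs only if no break fired
        for kind, computers in computers_by_kind.items():
            if computers.get(category):
                break  # at least one revlog kind can compute this category
        else:
            raise LookupError(category)

    check_sidedata_category({b'changelog': {b'cat': [len]}}, b'cat')  # passes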
1750
1750
1751 pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
1751 pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
1752 wlock = util.nullcontextmanager
1752 wlock = util.nullcontextmanager
1753 if not bookmod.bookmarksinstore(repo):
1753 if not bookmod.bookmarksinstore(repo):
1754 wlock = repo.wlock
1754 wlock = repo.wlock
1755 with wlock(), repo.lock(), pullop.trmanager:
1755 with wlock(), repo.lock(), pullop.trmanager:
1756 if confirm or (
1756 if confirm or (
1757 repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
1757 repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
1758 ):
1758 ):
1759 add_confirm_callback(repo, pullop)
1759 add_confirm_callback(repo, pullop)
1760
1760
1761 # This should ideally be in _pullbundle2(). However, it needs to run
1761 # This should ideally be in _pullbundle2(). However, it needs to run
1762 # before discovery to avoid extra work.
1762 # before discovery to avoid extra work.
1763 _maybeapplyclonebundle(pullop)
1763 _maybeapplyclonebundle(pullop)
1764 streamclone.maybeperformlegacystreamclone(pullop)
1764 streamclone.maybeperformlegacystreamclone(pullop)
1765 _pulldiscovery(pullop)
1765 _pulldiscovery(pullop)
1766 if pullop.canusebundle2:
1766 if pullop.canusebundle2:
1767 _fullpullbundle2(repo, pullop)
1767 _fullpullbundle2(repo, pullop)
1768 _pullchangeset(pullop)
1768 _pullchangeset(pullop)
1769 _pullphase(pullop)
1769 _pullphase(pullop)
1770 _pullbookmarks(pullop)
1770 _pullbookmarks(pullop)
1771 _pullobsolete(pullop)
1771 _pullobsolete(pullop)
1772
1772
1773 # storing remotenames
1773 # storing remotenames
1774 if repo.ui.configbool(b'experimental', b'remotenames'):
1774 if repo.ui.configbool(b'experimental', b'remotenames'):
1775 logexchange.pullremotenames(repo, remote)
1775 logexchange.pullremotenames(repo, remote)
1776
1776
1777 return pullop
1777 return pullop
1778
1778
1779
1779
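
The lock handling in pull() above uses a small idiom worth noting: when the working-copy lock is not needed (bookmarks stored in the store), a callable producing a no-op context manager is swapped in so the with statement stays uniform. A self-contained sketch, with need_wlock as a hypothetical stand-in for the bookmarksinstore check:

    import contextlib

    def run_locked(repo, need_wlock):
        wlock = contextlib.nullcontext  # no-op unless the wlock is required
        if need_wlock:
            wlock = repo.wlock
        with wlock(), repo.lock():
            pass  # transactional work goes here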
1780 # list of steps to perform discovery before pull
1780 # list of steps to perform discovery before pull
1781 pulldiscoveryorder = []
1781 pulldiscoveryorder = []
1782
1782
1783 # Mapping between step name and function
1783 # Mapping between step name and function
1784 #
1784 #
1785 # This exists to help extensions wrap steps if necessary
1785 # This exists to help extensions wrap steps if necessary
1786 pulldiscoverymapping = {}
1786 pulldiscoverymapping = {}
1787
1787
1788
1788
1789 def pulldiscovery(stepname):
1789 def pulldiscovery(stepname):
1790 """decorator for function performing discovery before pull
1790 """decorator for function performing discovery before pull
1791
1791
1792 The function is added to the step -> function mapping and appended to the
1792 The function is added to the step -> function mapping and appended to the
1793 list of steps. Beware that decorated functions will be added in order (this
1793 list of steps. Beware that decorated functions will be added in order (this
1794 may matter).
1794 may matter).
1795
1795
1796 You can only use this decorator for a new step; if you want to wrap a step
1796 You can only use this decorator for a new step; if you want to wrap a step
1797 from an extension, change the pulldiscoverymapping dictionary directly."""
1797 from an extension, change the pulldiscoverymapping dictionary directly."""
1798
1798
1799 def dec(func):
1799 def dec(func):
1800 assert stepname not in pulldiscoverymapping
1800 assert stepname not in pulldiscoverymapping
1801 pulldiscoverymapping[stepname] = func
1801 pulldiscoverymapping[stepname] = func
1802 pulldiscoveryorder.append(stepname)
1802 pulldiscoveryorder.append(stepname)
1803 return func
1803 return func
1804
1804
1805 return dec
1805 return dec
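
As a sketch of how an extension might register an extra step with this decorator (the step name and the function body are hypothetical):

    from mercurial import exchange

    @exchange.pulldiscovery(b'ext:my-step')
    def _mydiscoverystep(pullop):
        # appended after the built-in steps; steps run in registration order
        pullop.repo.ui.debug(b'ext: extra discovery step ran\n')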
1806
1806
1807
1807
1808 def _pulldiscovery(pullop):
1808 def _pulldiscovery(pullop):
1809 """Run all discovery steps"""
1809 """Run all discovery steps"""
1810 for stepname in pulldiscoveryorder:
1810 for stepname in pulldiscoveryorder:
1811 step = pulldiscoverymapping[stepname]
1811 step = pulldiscoverymapping[stepname]
1812 step(pullop)
1812 step(pullop)
1813
1813
1814
1814
1815 @pulldiscovery(b'b1:bookmarks')
1815 @pulldiscovery(b'b1:bookmarks')
1816 def _pullbookmarkbundle1(pullop):
1816 def _pullbookmarkbundle1(pullop):
1817 """fetch bookmark data in bundle1 case
1817 """fetch bookmark data in bundle1 case
1818
1818
1819 If not using bundle2, we have to fetch bookmarks before changeset
1819 If not using bundle2, we have to fetch bookmarks before changeset
1820 discovery to reduce the chance and impact of race conditions."""
1820 discovery to reduce the chance and impact of race conditions."""
1821 if pullop.remotebookmarks is not None:
1821 if pullop.remotebookmarks is not None:
1822 return
1822 return
1823 if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
1823 if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
1824 # all known bundle2 servers now support listkeys, but let's be nice with
1824 # all known bundle2 servers now support listkeys, but let's be nice with
1825 # new implementations.
1825 # new implementations.
1826 return
1826 return
1827 books = listkeys(pullop.remote, b'bookmarks')
1827 books = listkeys(pullop.remote, b'bookmarks')
1828 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1828 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1829
1829
1830
1830
1831 @pulldiscovery(b'changegroup')
1831 @pulldiscovery(b'changegroup')
1832 def _pulldiscoverychangegroup(pullop):
1832 def _pulldiscoverychangegroup(pullop):
1833 """discovery phase for the pull
1833 """discovery phase for the pull
1834
1834
1835 Currently handles changeset discovery only; will change to handle all
1835 Currently handles changeset discovery only; will change to handle all
1836 discovery at some point."""
1836 discovery at some point."""
1837 tmp = discovery.findcommonincoming(
1837 tmp = discovery.findcommonincoming(
1838 pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
1838 pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
1839 )
1839 )
1840 common, fetch, rheads = tmp
1840 common, fetch, rheads = tmp
1841 has_node = pullop.repo.unfiltered().changelog.index.has_node
1841 has_node = pullop.repo.unfiltered().changelog.index.has_node
1842 if fetch and rheads:
1842 if fetch and rheads:
1843 # If a remote head is filtered locally, put it back in common.
1843 # If a remote head is filtered locally, put it back in common.
1844 #
1844 #
1845 # This is a hackish solution to catch most of the "common but locally
1845 # This is a hackish solution to catch most of the "common but locally
1846 # hidden" situations. We do not perform discovery on the unfiltered
1846 # hidden" situations. We do not perform discovery on the unfiltered
1847 # repository because it ends up doing a pathological amount of round
1847 # repository because it ends up doing a pathological amount of round
1848 # trips for a huge amount of changesets we do not care about.
1848 # trips for a huge amount of changesets we do not care about.
1849 #
1849 #
1850 # If a set of such "common but filtered" changesets exists on the server
1850 # If a set of such "common but filtered" changesets exists on the server
1851 # but does not include a remote head, we will not be able to detect it.
1851 # but does not include a remote head, we will not be able to detect it.
1852 scommon = set(common)
1852 scommon = set(common)
1853 for n in rheads:
1853 for n in rheads:
1854 if has_node(n):
1854 if has_node(n):
1855 if n not in scommon:
1855 if n not in scommon:
1856 common.append(n)
1856 common.append(n)
1857 if set(rheads).issubset(set(common)):
1857 if set(rheads).issubset(set(common)):
1858 fetch = []
1858 fetch = []
1859 pullop.common = common
1859 pullop.common = common
1860 pullop.fetch = fetch
1860 pullop.fetch = fetch
1861 pullop.rheads = rheads
1861 pullop.rheads = rheads
1862
1862
1863
1863
1864 def _pullbundle2(pullop):
1864 def _pullbundle2(pullop):
1865 """pull data using bundle2
1865 """pull data using bundle2
1866
1866
1867 For now, the only supported data is the changegroup."""
1867 For now, the only supported data is the changegroup."""
1868 kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}
1868 kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}
1869
1869
1870 # make ui easier to access
1870 # make ui easier to access
1871 ui = pullop.repo.ui
1871 ui = pullop.repo.ui
1872
1872
1873 # Decide whether a stream clone can and should be performed over
1873 # Decide whether a stream clone can and should be performed over
1874 # bundle2; the kwargs set below reflect that decision.
1874 # bundle2; the kwargs set below reflect that decision.
1875 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1875 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1876
1876
1877 # declare pull perimeters
1877 # declare pull perimeters
1878 kwargs[b'common'] = pullop.common
1878 kwargs[b'common'] = pullop.common
1879 kwargs[b'heads'] = pullop.heads or pullop.rheads
1879 kwargs[b'heads'] = pullop.heads or pullop.rheads
1880
1880
1881 # check that the server supports narrow, and if so add includepats and excludepats
1881 # check that the server supports narrow, and if so add includepats and excludepats
1882 servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
1882 servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
1883 if servernarrow and pullop.includepats:
1883 if servernarrow and pullop.includepats:
1884 kwargs[b'includepats'] = pullop.includepats
1884 kwargs[b'includepats'] = pullop.includepats
1885 if servernarrow and pullop.excludepats:
1885 if servernarrow and pullop.excludepats:
1886 kwargs[b'excludepats'] = pullop.excludepats
1886 kwargs[b'excludepats'] = pullop.excludepats
1887
1887
1888 if streaming:
1888 if streaming:
1889 kwargs[b'cg'] = False
1889 kwargs[b'cg'] = False
1890 kwargs[b'stream'] = True
1890 kwargs[b'stream'] = True
1891 pullop.stepsdone.add(b'changegroup')
1891 pullop.stepsdone.add(b'changegroup')
1892 pullop.stepsdone.add(b'phases')
1892 pullop.stepsdone.add(b'phases')
1893
1893
1894 else:
1894 else:
1895 # pulling changegroup
1895 # pulling changegroup
1896 pullop.stepsdone.add(b'changegroup')
1896 pullop.stepsdone.add(b'changegroup')
1897
1897
1898 kwargs[b'cg'] = pullop.fetch
1898 kwargs[b'cg'] = pullop.fetch
1899
1899
1900 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1900 legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
1901 hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
1901 hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
1902 if not legacyphase and hasbinaryphase:
1902 if not legacyphase and hasbinaryphase:
1903 kwargs[b'phases'] = True
1903 kwargs[b'phases'] = True
1904 pullop.stepsdone.add(b'phases')
1904 pullop.stepsdone.add(b'phases')
1905
1905
1906 if b'listkeys' in pullop.remotebundle2caps:
1906 if b'listkeys' in pullop.remotebundle2caps:
1907 if b'phases' not in pullop.stepsdone:
1907 if b'phases' not in pullop.stepsdone:
1908 kwargs[b'listkeys'] = [b'phases']
1908 kwargs[b'listkeys'] = [b'phases']
1909
1909
1910 bookmarksrequested = False
1910 bookmarksrequested = False
1911 legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
1911 legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
1912 hasbinarybook = b'bookmarks' in pullop.remotebundle2caps
1912 hasbinarybook = b'bookmarks' in pullop.remotebundle2caps
1913
1913
1914 if pullop.remotebookmarks is not None:
1914 if pullop.remotebookmarks is not None:
1915 pullop.stepsdone.add(b'request-bookmarks')
1915 pullop.stepsdone.add(b'request-bookmarks')
1916
1916
1917 if (
1917 if (
1918 b'request-bookmarks' not in pullop.stepsdone
1918 b'request-bookmarks' not in pullop.stepsdone
1919 and pullop.remotebookmarks is None
1919 and pullop.remotebookmarks is None
1920 and not legacybookmark
1920 and not legacybookmark
1921 and hasbinarybook
1921 and hasbinarybook
1922 ):
1922 ):
1923 kwargs[b'bookmarks'] = True
1923 kwargs[b'bookmarks'] = True
1924 bookmarksrequested = True
1924 bookmarksrequested = True
1925
1925
1926 if b'listkeys' in pullop.remotebundle2caps:
1926 if b'listkeys' in pullop.remotebundle2caps:
1927 if b'request-bookmarks' not in pullop.stepsdone:
1927 if b'request-bookmarks' not in pullop.stepsdone:
1928 # make sure to always include bookmark data when migrating
1928 # make sure to always include bookmark data when migrating
1929 # `hg incoming --bundle` to using this function.
1929 # `hg incoming --bundle` to using this function.
1930 pullop.stepsdone.add(b'request-bookmarks')
1930 pullop.stepsdone.add(b'request-bookmarks')
1931 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1931 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1932
1932
1933 # If this is a full pull / clone and the server supports the clone bundles
1933 # If this is a full pull / clone and the server supports the clone bundles
1934 # feature, tell the server whether we attempted a clone bundle. The
1934 # feature, tell the server whether we attempted a clone bundle. The
1935 # presence of this flag indicates the client supports clone bundles. This
1935 # presence of this flag indicates the client supports clone bundles. This
1936 # will enable the server to treat clients that support clone bundles
1936 # will enable the server to treat clients that support clone bundles
1937 # differently from those that don't.
1937 # differently from those that don't.
1938 if (
1938 if (
1939 pullop.remote.capable(b'clonebundles')
1939 pullop.remote.capable(b'clonebundles')
1940 and pullop.heads is None
1940 and pullop.heads is None
1941 and list(pullop.common) == [pullop.repo.nullid]
1941 and list(pullop.common) == [pullop.repo.nullid]
1942 ):
1942 ):
1943 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1943 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1944
1944
1945 if streaming:
1945 if streaming:
1946 pullop.repo.ui.status(_(b'streaming all changes\n'))
1946 pullop.repo.ui.status(_(b'streaming all changes\n'))
1947 elif not pullop.fetch:
1947 elif not pullop.fetch:
1948 pullop.repo.ui.status(_(b"no changes found\n"))
1948 pullop.repo.ui.status(_(b"no changes found\n"))
1949 pullop.cgresult = 0
1949 pullop.cgresult = 0
1950 else:
1950 else:
1951 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
1951 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
1952 pullop.repo.ui.status(_(b"requesting all changes\n"))
1952 pullop.repo.ui.status(_(b"requesting all changes\n"))
1953 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1953 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1954 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1954 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1955 if obsolete.commonversion(remoteversions) is not None:
1955 if obsolete.commonversion(remoteversions) is not None:
1956 kwargs[b'obsmarkers'] = True
1956 kwargs[b'obsmarkers'] = True
1957 pullop.stepsdone.add(b'obsmarkers')
1957 pullop.stepsdone.add(b'obsmarkers')
1958 _pullbundle2extraprepare(pullop, kwargs)
1958 _pullbundle2extraprepare(pullop, kwargs)
1959
1959
1960 remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
1960 remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
1961 if remote_sidedata:
1961 if remote_sidedata:
1962 kwargs[b'remote_sidedata'] = remote_sidedata
1962 kwargs[b'remote_sidedata'] = remote_sidedata
1963
1963
1964 with pullop.remote.commandexecutor() as e:
1964 with pullop.remote.commandexecutor() as e:
1965 args = dict(kwargs)
1965 args = dict(kwargs)
1966 args[b'source'] = b'pull'
1966 args[b'source'] = b'pull'
1967 bundle = e.callcommand(b'getbundle', args).result()
1967 bundle = e.callcommand(b'getbundle', args).result()
1968
1968
1969 try:
1969 try:
1970 op = bundle2.bundleoperation(
1970 op = bundle2.bundleoperation(
1971 pullop.repo,
1971 pullop.repo,
1972 pullop.gettransaction,
1972 pullop.gettransaction,
1973 source=b'pull',
1973 source=b'pull',
1974 remote=pullop.remote,
1974 remote=pullop.remote,
1975 )
1975 )
1976 op.modes[b'bookmarks'] = b'records'
1976 op.modes[b'bookmarks'] = b'records'
1977 bundle2.processbundle(
1977 bundle2.processbundle(
1978 pullop.repo,
1978 pullop.repo,
1979 bundle,
1979 bundle,
1980 op=op,
1980 op=op,
1981 remote=pullop.remote,
1981 remote=pullop.remote,
1982 )
1982 )
1983 except bundle2.AbortFromPart as exc:
1983 except bundle2.AbortFromPart as exc:
1984 pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
1984 pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
1985 raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
1985 raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
1986 except error.BundleValueError as exc:
1986 except error.BundleValueError as exc:
1987 raise error.RemoteError(_(b'missing support for %s') % exc)
1987 raise error.RemoteError(_(b'missing support for %s') % exc)
1988
1988
1989 if pullop.fetch:
1989 if pullop.fetch:
1990 pullop.cgresult = bundle2.combinechangegroupresults(op)
1990 pullop.cgresult = bundle2.combinechangegroupresults(op)
1991
1991
1992 # processing phases change
1992 # processing phases change
1993 for namespace, value in op.records[b'listkeys']:
1993 for namespace, value in op.records[b'listkeys']:
1994 if namespace == b'phases':
1994 if namespace == b'phases':
1995 _pullapplyphases(pullop, value)
1995 _pullapplyphases(pullop, value)
1996
1996
1997 # processing bookmark update
1997 # processing bookmark update
1998 if bookmarksrequested:
1998 if bookmarksrequested:
1999 books = {}
1999 books = {}
2000 for record in op.records[b'bookmarks']:
2000 for record in op.records[b'bookmarks']:
2001 books[record[b'bookmark']] = record[b"node"]
2001 books[record[b'bookmark']] = record[b"node"]
2002 pullop.remotebookmarks = books
2002 pullop.remotebookmarks = books
2003 else:
2003 else:
2004 for namespace, value in op.records[b'listkeys']:
2004 for namespace, value in op.records[b'listkeys']:
2005 if namespace == b'bookmarks':
2005 if namespace == b'bookmarks':
2006 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2006 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2007
2007
2008 # bookmark data were either already there or pulled in the bundle
2008 # bookmark data were either already there or pulled in the bundle
2009 if pullop.remotebookmarks is not None:
2009 if pullop.remotebookmarks is not None:
2010 _pullbookmarks(pullop)
2010 _pullbookmarks(pullop)
2011
2011
2012
2012
2013 def _pullbundle2extraprepare(pullop, kwargs):
2013 def _pullbundle2extraprepare(pullop, kwargs):
2014 """hook function so that extensions can extend the getbundle call"""
2014 """hook function so that extensions can extend the getbundle call"""
2015
2015
2016
2016
2017 def _pullchangeset(pullop):
2017 def _pullchangeset(pullop):
2018 """pull changeset from unbundle into the local repo"""
2018 """pull changeset from unbundle into the local repo"""
2019 # We delay opening the transaction as late as possible so we don't
2019 # We delay opening the transaction as late as possible so we don't
2020 # open a transaction for nothing and don't break future useful
2020 # open a transaction for nothing and don't break future useful
2021 # rollback calls
2021 # rollback calls
2022 if b'changegroup' in pullop.stepsdone:
2022 if b'changegroup' in pullop.stepsdone:
2023 return
2023 return
2024 pullop.stepsdone.add(b'changegroup')
2024 pullop.stepsdone.add(b'changegroup')
2025 if not pullop.fetch:
2025 if not pullop.fetch:
2026 pullop.repo.ui.status(_(b"no changes found\n"))
2026 pullop.repo.ui.status(_(b"no changes found\n"))
2027 pullop.cgresult = 0
2027 pullop.cgresult = 0
2028 return
2028 return
2029 tr = pullop.gettransaction()
2029 tr = pullop.gettransaction()
2030 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
2030 if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
2031 pullop.repo.ui.status(_(b"requesting all changes\n"))
2031 pullop.repo.ui.status(_(b"requesting all changes\n"))
2032 elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
2032 elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
2033 # issue1320, avoid a race if remote changed after discovery
2033 # issue1320, avoid a race if remote changed after discovery
2034 pullop.heads = pullop.rheads
2034 pullop.heads = pullop.rheads
2035
2035
2036 if pullop.remote.capable(b'getbundle'):
2036 if pullop.remote.capable(b'getbundle'):
2037 # TODO: get bundlecaps from remote
2037 # TODO: get bundlecaps from remote
2038 cg = pullop.remote.getbundle(
2038 cg = pullop.remote.getbundle(
2039 b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
2039 b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
2040 )
2040 )
2041 elif pullop.heads is None:
2041 elif pullop.heads is None:
2042 with pullop.remote.commandexecutor() as e:
2042 with pullop.remote.commandexecutor() as e:
2043 cg = e.callcommand(
2043 cg = e.callcommand(
2044 b'changegroup',
2044 b'changegroup',
2045 {
2045 {
2046 b'nodes': pullop.fetch,
2046 b'nodes': pullop.fetch,
2047 b'source': b'pull',
2047 b'source': b'pull',
2048 },
2048 },
2049 ).result()
2049 ).result()
2050
2050
2051 elif not pullop.remote.capable(b'changegroupsubset'):
2051 elif not pullop.remote.capable(b'changegroupsubset'):
2052 raise error.Abort(
2052 raise error.Abort(
2053 _(
2053 _(
2054 b"partial pull cannot be done because "
2054 b"partial pull cannot be done because "
2055 b"other repository doesn't support "
2055 b"other repository doesn't support "
2056 b"changegroupsubset."
2056 b"changegroupsubset."
2057 )
2057 )
2058 )
2058 )
2059 else:
2059 else:
2060 with pullop.remote.commandexecutor() as e:
2060 with pullop.remote.commandexecutor() as e:
2061 cg = e.callcommand(
2061 cg = e.callcommand(
2062 b'changegroupsubset',
2062 b'changegroupsubset',
2063 {
2063 {
2064 b'bases': pullop.fetch,
2064 b'bases': pullop.fetch,
2065 b'heads': pullop.heads,
2065 b'heads': pullop.heads,
2066 b'source': b'pull',
2066 b'source': b'pull',
2067 },
2067 },
2068 ).result()
2068 ).result()
2069
2069
2070 bundleop = bundle2.applybundle(
2070 bundleop = bundle2.applybundle(
2071 pullop.repo,
2071 pullop.repo,
2072 cg,
2072 cg,
2073 tr,
2073 tr,
2074 b'pull',
2074 b'pull',
2075 pullop.remote.url(),
2075 pullop.remote.url(),
2076 remote=pullop.remote,
2076 remote=pullop.remote,
2077 )
2077 )
2078 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
2078 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
2079
2079
2080
2080
2081 def _pullphase(pullop):
2081 def _pullphase(pullop):
2082 # Get remote phases data from remote
2082 # Get remote phases data from remote
2083 if b'phases' in pullop.stepsdone:
2083 if b'phases' in pullop.stepsdone:
2084 return
2084 return
2085 remotephases = listkeys(pullop.remote, b'phases')
2085 remotephases = listkeys(pullop.remote, b'phases')
2086 _pullapplyphases(pullop, remotephases)
2086 _pullapplyphases(pullop, remotephases)
2087
2087
2088
2088
2089 def _pullapplyphases(pullop, remotephases):
2089 def _pullapplyphases(pullop, remotephases):
2090 """apply phase movement from observed remote state"""
2090 """apply phase movement from observed remote state"""
2091 if b'phases' in pullop.stepsdone:
2091 if b'phases' in pullop.stepsdone:
2092 return
2092 return
2093 pullop.stepsdone.add(b'phases')
2093 pullop.stepsdone.add(b'phases')
2094 publishing = bool(remotephases.get(b'publishing', False))
2094 publishing = bool(remotephases.get(b'publishing', False))
2095 if remotephases and not publishing:
2095 if remotephases and not publishing:
2096 unfi = pullop.repo.unfiltered()
2096 unfi = pullop.repo.unfiltered()
2097 to_rev = unfi.changelog.index.rev
2097 to_rev = unfi.changelog.index.rev
2098 to_node = unfi.changelog.node
2098 to_node = unfi.changelog.node
2099 pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
2099 pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
2100 # remote is new and non-publishing
2100 # remote is new and non-publishing
2101 pheads_revs, _dr = phases.analyze_remote_phases(
2101 pheads_revs, _dr = phases.analyze_remote_phases(
2102 pullop.repo,
2102 pullop.repo,
2103 pulledsubset_revs,
2103 pulledsubset_revs,
2104 remotephases,
2104 remotephases,
2105 )
2105 )
2106 pheads = [to_node(r) for r in pheads_revs]
2106 pheads = [to_node(r) for r in pheads_revs]
2107 dheads = pullop.pulledsubset
2107 dheads = pullop.pulledsubset
2108 else:
2108 else:
2109 # Remote is old or publishing; all common
2109 # Remote is old or publishing; all common
2110 # changesets should be seen as public
2110 # changesets should be seen as public
2111 pheads = pullop.pulledsubset
2111 pheads = pullop.pulledsubset
2112 dheads = []
2112 dheads = []
2113 unfi = pullop.repo.unfiltered()
2113 unfi = pullop.repo.unfiltered()
2114 phase = unfi._phasecache.phase
2114 phase = unfi._phasecache.phase
2115 rev = unfi.changelog.index.get_rev
2115 rev = unfi.changelog.index.get_rev
2116 public = phases.public
2116 public = phases.public
2117 draft = phases.draft
2117 draft = phases.draft
2118
2118
2119 # exclude changesets already public locally and update the others
2119 # exclude changesets already public locally and update the others
2120 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
2120 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
2121 if pheads:
2121 if pheads:
2122 tr = pullop.gettransaction()
2122 tr = pullop.gettransaction()
2123 phases.advanceboundary(pullop.repo, tr, public, pheads)
2123 phases.advanceboundary(pullop.repo, tr, public, pheads)
2124
2124
2125 # exclude changesets already draft locally and update the others
2125 # exclude changesets already draft locally and update the others
2126 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
2126 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
2127 if dheads:
2127 if dheads:
2128 tr = pullop.gettransaction()
2128 tr = pullop.gettransaction()
2129 phases.advanceboundary(pullop.repo, tr, draft, dheads)
2129 phases.advanceboundary(pullop.repo, tr, draft, dheads)
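
The two filters above rely on the ordering of the phase constants and on the fact that advanceboundary only ever lowers a changeset's phase; a quick illustration of the ordering assumed here:

    from mercurial import phases

    # public < draft < secret, so "phase(...) > public" selects exactly the
    # changesets that can still be advanced (lowered) toward public
    assert phases.public < phases.draft < phases.secret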
2130
2130
2131
2131
2132 def _pullbookmarks(pullop):
2132 def _pullbookmarks(pullop):
2133 """process the remote bookmark information to update the local one"""
2133 """process the remote bookmark information to update the local one"""
2134 if b'bookmarks' in pullop.stepsdone:
2134 if b'bookmarks' in pullop.stepsdone:
2135 return
2135 return
2136 pullop.stepsdone.add(b'bookmarks')
2136 pullop.stepsdone.add(b'bookmarks')
2137 repo = pullop.repo
2137 repo = pullop.repo
2138 remotebookmarks = pullop.remotebookmarks
2138 remotebookmarks = pullop.remotebookmarks
2139 bookmarks_mode = None
2139 bookmarks_mode = None
2140 if pullop.remote_path is not None:
2140 if pullop.remote_path is not None:
2141 bookmarks_mode = pullop.remote_path.bookmarks_mode
2141 bookmarks_mode = pullop.remote_path.bookmarks_mode
2142 bookmod.updatefromremote(
2142 bookmod.updatefromremote(
2143 repo.ui,
2143 repo.ui,
2144 repo,
2144 repo,
2145 remotebookmarks,
2145 remotebookmarks,
2146 pullop.remote.url(),
2146 pullop.remote.url(),
2147 pullop.gettransaction,
2147 pullop.gettransaction,
2148 explicit=pullop.explicitbookmarks,
2148 explicit=pullop.explicitbookmarks,
2149 mode=bookmarks_mode,
2149 mode=bookmarks_mode,
2150 )
2150 )
2151
2151
2152
2152
2153 def _pullobsolete(pullop):
2153 def _pullobsolete(pullop):
2154 """utility function to pull obsolete markers from a remote
2154 """utility function to pull obsolete markers from a remote
2155
2155
2156 `gettransaction` is a function that returns the pull transaction, creating
2156 `gettransaction` is a function that returns the pull transaction, creating
2157 one if necessary. We return the transaction to inform the calling code that
2157 one if necessary. We return the transaction to inform the calling code that
2158 a new transaction has been created (when applicable).
2158 a new transaction has been created (when applicable).
2159
2159
2160 Exists mostly to allow overriding for experimentation purposes"""
2160 Exists mostly to allow overriding for experimentation purposes"""
2161 if b'obsmarkers' in pullop.stepsdone:
2161 if b'obsmarkers' in pullop.stepsdone:
2162 return
2162 return
2163 pullop.stepsdone.add(b'obsmarkers')
2163 pullop.stepsdone.add(b'obsmarkers')
2164 tr = None
2164 tr = None
2165 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
2165 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
2166 pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
2166 pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
2167 remoteobs = listkeys(pullop.remote, b'obsolete')
2167 remoteobs = listkeys(pullop.remote, b'obsolete')
2168 if b'dump0' in remoteobs:
2168 if b'dump0' in remoteobs:
2169 tr = pullop.gettransaction()
2169 tr = pullop.gettransaction()
2170 markers = []
2170 markers = []
2171 for key in sorted(remoteobs, reverse=True):
2171 for key in sorted(remoteobs, reverse=True):
2172 if key.startswith(b'dump'):
2172 if key.startswith(b'dump'):
2173 data = util.b85decode(remoteobs[key])
2173 data = util.b85decode(remoteobs[key])
2174 version, newmarks = obsolete._readmarkers(data)
2174 version, newmarks = obsolete._readmarkers(data)
2175 markers += newmarks
2175 markers += newmarks
2176 if markers:
2176 if markers:
2177 pullop.repo.obsstore.add(tr, markers)
2177 pullop.repo.obsstore.add(tr, markers)
2178 pullop.repo.invalidatevolatilesets()
2178 pullop.repo.invalidatevolatilesets()
2179 return tr
2179 return tr
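
The markers can arrive split across several numbered dump keys (b'dump0', b'dump1', ...); the loop above base85-decodes each such chunk and concatenates the parsed markers. A toy illustration of just the key filtering, with hypothetical payloads:

    >>> remoteobs = {b'dump0': b'...', b'dump1': b'...', b'other': b'...'}
    >>> [k for k in sorted(remoteobs, reverse=True) if k.startswith(b'dump')]
    [b'dump1', b'dump0']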
2180
2180
2181
2181
2182 def applynarrowacl(repo, kwargs):
2182 def applynarrowacl(repo, kwargs):
2183 """Apply narrow fetch access control.
2183 """Apply narrow fetch access control.
2184
2184
2185 This massages the named arguments for getbundle wire protocol commands
2185 This massages the named arguments for getbundle wire protocol commands
2186 so requested data is filtered through access control rules.
2186 so requested data is filtered through access control rules.
2187 """
2187 """
2188 ui = repo.ui
2188 ui = repo.ui
2189 # TODO this assumes existence of HTTP and is a layering violation.
2189 # TODO this assumes existence of HTTP and is a layering violation.
2190 username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
2190 username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
2191 user_includes = ui.configlist(
2191 user_includes = ui.configlist(
2192 _NARROWACL_SECTION,
2192 _NARROWACL_SECTION,
2193 username + b'.includes',
2193 username + b'.includes',
2194 ui.configlist(_NARROWACL_SECTION, b'default.includes'),
2194 ui.configlist(_NARROWACL_SECTION, b'default.includes'),
2195 )
2195 )
2196 user_excludes = ui.configlist(
2196 user_excludes = ui.configlist(
2197 _NARROWACL_SECTION,
2197 _NARROWACL_SECTION,
2198 username + b'.excludes',
2198 username + b'.excludes',
2199 ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
2199 ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
2200 )
2200 )
2201 if not user_includes:
2201 if not user_includes:
2202 raise error.Abort(
2202 raise error.Abort(
2203 _(b"%s configuration for user %s is empty")
2203 _(b"%s configuration for user %s is empty")
2204 % (_NARROWACL_SECTION, username)
2204 % (_NARROWACL_SECTION, username)
2205 )
2205 )
2206
2206
2207 user_includes = [
2207 user_includes = [
2208 b'path:.' if p == b'*' else b'path:' + p for p in user_includes
2208 b'path:.' if p == b'*' else b'path:' + p for p in user_includes
2209 ]
2209 ]
2210 user_excludes = [
2210 user_excludes = [
2211 b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
2211 b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
2212 ]
2212 ]
2213
2213
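
The two comprehensions above normalize ACL config entries into matcher patterns: the wildcard b'*' becomes b'path:.' (the whole repository) and anything else gains a b'path:' prefix. A quick worked example with hypothetical entries:

    >>> entries = [b'*', b'foo/bar', b'baz']
    >>> [b'path:.' if p == b'*' else b'path:' + p for p in entries]
    [b'path:.', b'path:foo/bar', b'path:baz']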
2214 req_includes = set(kwargs.get('includepats', []))
2214 req_includes = set(kwargs.get('includepats', []))
2215 req_excludes = set(kwargs.get('excludepats', []))
2215 req_excludes = set(kwargs.get('excludepats', []))
2216
2216
2217 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
2217 req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
2218 req_includes, req_excludes, user_includes, user_excludes
2218 req_includes, req_excludes, user_includes, user_excludes
2219 )
2219 )
2220
2220
2221 if invalid_includes:
2221 if invalid_includes:
2222 raise error.Abort(
2222 raise error.Abort(
2223 _(b"The following includes are not accessible for %s: %s")
2223 _(b"The following includes are not accessible for %s: %s")
2224 % (username, stringutil.pprint(invalid_includes))
2224 % (username, stringutil.pprint(invalid_includes))
2225 )
2225 )
2226
2226
2227 new_args = {}
2227 new_args = {}
2228 new_args.update(kwargs)
2228 new_args.update(kwargs)
2229 new_args['narrow'] = True
2229 new_args['narrow'] = True
2230 new_args['narrow_acl'] = True
2230 new_args['narrow_acl'] = True
2231 new_args['includepats'] = req_includes
2231 new_args['includepats'] = req_includes
2232 if req_excludes:
2232 if req_excludes:
2233 new_args['excludepats'] = req_excludes
2233 new_args['excludepats'] = req_excludes
2234
2234
2235 return new_args
2235 return new_args
2236
2236
2237
2237
2238 def _computeellipsis(repo, common, heads, known, match, depth=None):
2238 def _computeellipsis(repo, common, heads, known, match, depth=None):
2239 """Compute the shape of a narrowed DAG.
2239 """Compute the shape of a narrowed DAG.
2240
2240
2241 Args:
2241 Args:
2242 repo: The repository we're transferring.
2242 repo: The repository we're transferring.
2243 common: The roots of the DAG range we're transferring.
2243 common: The roots of the DAG range we're transferring.
2244 May be just [nullid], which means all ancestors of heads.
2244 May be just [nullid], which means all ancestors of heads.
2245 heads: The heads of the DAG range we're transferring.
2245 heads: The heads of the DAG range we're transferring.
2246 match: The narrowmatcher that allows us to identify relevant changes.
2246 match: The narrowmatcher that allows us to identify relevant changes.
2247 depth: If not None, only consider nodes to be full nodes if they are at
2247 depth: If not None, only consider nodes to be full nodes if they are at
2248 most depth changesets away from one of heads.
2248 most depth changesets away from one of heads.
2249
2249
2250 Returns:
2250 Returns:
2251 A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:
2251 A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:
2252
2252
2253 visitnodes: The list of nodes (either full or ellipsis) which
2253 visitnodes: The list of nodes (either full or ellipsis) which
2254 need to be sent to the client.
2254 need to be sent to the client.
2255 relevant_nodes: The set of changelog nodes which change a file inside
2255 relevant_nodes: The set of changelog nodes which change a file inside
2256 the narrowspec. The client needs these as non-ellipsis nodes.
2256 the narrowspec. The client needs these as non-ellipsis nodes.
2257 ellipsisroots: A dict of {rev: parents} that is used in
2257 ellipsisroots: A dict of {rev: parents} that is used in
2258 narrowchangegroup to produce ellipsis nodes with the
2258 narrowchangegroup to produce ellipsis nodes with the
2259 correct parents.
2259 correct parents.
2260 """
2260 """
2261 cl = repo.changelog
2261 cl = repo.changelog
2262 mfl = repo.manifestlog
2262 mfl = repo.manifestlog
2263
2263
2264 clrev = cl.rev
2264 clrev = cl.rev
2265
2265
2266 commonrevs = {clrev(n) for n in common} | {nullrev}
2266 commonrevs = {clrev(n) for n in common} | {nullrev}
2267 headsrevs = {clrev(n) for n in heads}
2267 headsrevs = {clrev(n) for n in heads}
2268
2268
2269 if depth:
2269 if depth:
2270 revdepth = {h: 0 for h in headsrevs}
2270 revdepth = {h: 0 for h in headsrevs}
2271
2271
2272 ellipsisheads = collections.defaultdict(set)
2272 ellipsisheads = collections.defaultdict(set)
2273 ellipsisroots = collections.defaultdict(set)
2273 ellipsisroots = collections.defaultdict(set)
2274
2274
2275 def addroot(head, curchange):
2275 def addroot(head, curchange):
2276 """Add a root to an ellipsis head, splitting heads with 3 roots."""
2276 """Add a root to an ellipsis head, splitting heads with 3 roots."""
2277 ellipsisroots[head].add(curchange)
2277 ellipsisroots[head].add(curchange)
2278 # Recursively split ellipsis heads with 3 roots by finding the
2278 # Recursively split ellipsis heads with 3 roots by finding the
2279 # roots' youngest common descendant which is an elided merge commit.
2279 # roots' youngest common descendant which is an elided merge commit.
2280 # That descendant takes 2 of the 3 roots as its own, and becomes a
2280 # That descendant takes 2 of the 3 roots as its own, and becomes a
2281 # root of the head.
2281 # root of the head.
2282 while len(ellipsisroots[head]) > 2:
2282 while len(ellipsisroots[head]) > 2:
2283 child, roots = splithead(head)
2283 child, roots = splithead(head)
2284 splitroots(head, child, roots)
2284 splitroots(head, child, roots)
2285 head = child # Recurse in case we just added a 3rd root
2285 head = child # Recurse in case we just added a 3rd root
2286
2286
2287 def splitroots(head, child, roots):
2287 def splitroots(head, child, roots):
2288 ellipsisroots[head].difference_update(roots)
2288 ellipsisroots[head].difference_update(roots)
2289 ellipsisroots[head].add(child)
2289 ellipsisroots[head].add(child)
2290 ellipsisroots[child].update(roots)
2290 ellipsisroots[child].update(roots)
2291 ellipsisroots[child].discard(child)
2291 ellipsisroots[child].discard(child)
2292
2292
2293 def splithead(head):
2293 def splithead(head):
2294 r1, r2, r3 = sorted(ellipsisroots[head])
2294 r1, r2, r3 = sorted(ellipsisroots[head])
2295 for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
2295 for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
2296 mid = repo.revs(
2296 mid = repo.revs(
2297 b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
2297 b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
2298 )
2298 )
2299 for j in mid:
2299 for j in mid:
2300 if j == nr2:
2300 if j == nr2:
2301 return nr2, (nr1, nr2)
2301 return nr2, (nr1, nr2)
2302 if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
2302 if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
2303 return j, (nr1, nr2)
2303 return j, (nr1, nr2)
2304 raise error.Abort(
2304 raise error.Abort(
2305 _(
2305 _(
2306 b'Failed to split up ellipsis node! head: %d, '
2306 b'Failed to split up ellipsis node! head: %d, '
2307 b'roots: %d %d %d'
2307 b'roots: %d %d %d'
2308 )
2308 )
2309 % (head, r1, r2, r3)
2309 % (head, r1, r2, r3)
2310 )
2310 )
2311
2311
2312 missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
2312 missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
2313 visit = reversed(missing)
2313 visit = reversed(missing)
2314 relevant_nodes = set()
2314 relevant_nodes = set()
2315 visitnodes = [cl.node(m) for m in missing]
2315 visitnodes = [cl.node(m) for m in missing]
2316 required = set(headsrevs) | known
2316 required = set(headsrevs) | known
2317 for rev in visit:
2317 for rev in visit:
2318 clrev = cl.changelogrevision(rev)
2318 clrev = cl.changelogrevision(rev)
2319 ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
2319 ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
2320 if depth is not None:
2320 if depth is not None:
2321 curdepth = revdepth[rev]
2321 curdepth = revdepth[rev]
2322 for p in ps:
2322 for p in ps:
2323 revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
2323 revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
2324 needed = False
2324 needed = False
2325 shallow_enough = depth is None or revdepth[rev] <= depth
2325 shallow_enough = depth is None or revdepth[rev] <= depth
2326 if shallow_enough:
2326 if shallow_enough:
2327 curmf = mfl[clrev.manifest].read()
2327 curmf = mfl[clrev.manifest].read()
2328 if ps:
2328 if ps:
2329 # We choose to not trust the changed files list in
2329 # We choose to not trust the changed files list in
2330 # changesets because it's not always correct. TODO: could
2330 # changesets because it's not always correct. TODO: could
2331 # we trust it for the non-merge case?
2331 # we trust it for the non-merge case?
2332 p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
2332 p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
2333 needed = bool(curmf.diff(p1mf, match))
2333 needed = bool(curmf.diff(p1mf, match))
2334 if not needed and len(ps) > 1:
2334 if not needed and len(ps) > 1:
2335 # For merge changes, the list of changed files is not
2335 # For merge changes, the list of changed files is not
2336 # helpful, since we need to emit the merge if a file
2336 # helpful, since we need to emit the merge if a file
2337 # in the narrow spec has changed on either side of the
2337 # in the narrow spec has changed on either side of the
2338 # merge. As a result, we do a manifest diff to check.
2338 # merge. As a result, we do a manifest diff to check.
2339 p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
2339 p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
2340 needed = bool(curmf.diff(p2mf, match))
2340 needed = bool(curmf.diff(p2mf, match))
2341 else:
2341 else:
2342 # For a root node, we need to include the node if any
2342 # For a root node, we need to include the node if any
2343 # files in the node match the narrowspec.
2343 # files in the node match the narrowspec.
2344 needed = any(curmf.walk(match))
2344 needed = any(curmf.walk(match))
2345
2345
2346 if needed:
2346 if needed:
2347 for head in ellipsisheads[rev]:
2347 for head in ellipsisheads[rev]:
2348 addroot(head, rev)
2348 addroot(head, rev)
2349 for p in ps:
2349 for p in ps:
2350 required.add(p)
2350 required.add(p)
2351 relevant_nodes.add(cl.node(rev))
2351 relevant_nodes.add(cl.node(rev))
2352 else:
2352 else:
2353 if not ps:
2353 if not ps:
2354 ps = [nullrev]
2354 ps = [nullrev]
2355 if rev in required:
2355 if rev in required:
2356 for head in ellipsisheads[rev]:
2356 for head in ellipsisheads[rev]:
2357 addroot(head, rev)
2357 addroot(head, rev)
2358 for p in ps:
2358 for p in ps:
2359 ellipsisheads[p].add(rev)
2359 ellipsisheads[p].add(rev)
2360 else:
2360 else:
2361 for p in ps:
2361 for p in ps:
2362 ellipsisheads[p] |= ellipsisheads[rev]
2362 ellipsisheads[p] |= ellipsisheads[rev]
2363
2363
2364 # add common changesets as roots of their reachable ellipsis heads
2364 # add common changesets as roots of their reachable ellipsis heads
2365 for c in commonrevs:
2365 for c in commonrevs:
2366 for head in ellipsisheads[c]:
2366 for head in ellipsisheads[c]:
2367 addroot(head, c)
2367 addroot(head, c)
2368 return visitnodes, relevant_nodes, ellipsisroots
2368 return visitnodes, relevant_nodes, ellipsisroots
2369
2369
2370
2370
2371 def caps20to10(repo, role):
2371 def caps20to10(repo, role):
2372 """return a set with appropriate options to use bundle20 during getbundle"""
2372 """return a set with appropriate options to use bundle20 during getbundle"""
2373 caps = {b'HG20'}
2373 caps = {b'HG20'}
2374 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
2374 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
2375 caps.add(b'bundle2=' + urlreq.quote(capsblob))
2375 caps.add(b'bundle2=' + urlreq.quote(capsblob))
2376 return caps
2376 return caps
2377
2377
2378
2378
2379 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2379 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2380 getbundle2partsorder = []
2380 getbundle2partsorder = []
2381
2381
2382 # Mapping between step name and function
2382 # Mapping between step name and function
2383 #
2383 #
2384 # This exists to help extensions wrap steps if necessary
2384 # This exists to help extensions wrap steps if necessary
2385 getbundle2partsmapping = {}
2385 getbundle2partsmapping = {}
2386
2386
2387
2387
2388 def getbundle2partsgenerator(stepname, idx=None):
2388 def getbundle2partsgenerator(stepname, idx=None):
2389 """decorator for function generating bundle2 part for getbundle
2389 """decorator for function generating bundle2 part for getbundle
2390
2390
2391 The function is added to the step -> function mapping and appended to the
2391 The function is added to the step -> function mapping and appended to the
2392 list of steps. Beware that decorated functions will be added in order
2392 list of steps. Beware that decorated functions will be added in order
2393 (this may matter).
2393 (this may matter).
2394
2394
2395 You can only use this decorator for new steps; if you want to wrap a step
2395 You can only use this decorator for new steps; if you want to wrap a step
2396 from an extension, modify the getbundle2partsmapping dictionary directly."""
2396 from an extension, modify the getbundle2partsmapping dictionary directly."""
2397
2397
2398 def dec(func):
2398 def dec(func):
2399 assert stepname not in getbundle2partsmapping
2399 assert stepname not in getbundle2partsmapping
2400 getbundle2partsmapping[stepname] = func
2400 getbundle2partsmapping[stepname] = func
2401 if idx is None:
2401 if idx is None:
2402 getbundle2partsorder.append(stepname)
2402 getbundle2partsorder.append(stepname)
2403 else:
2403 else:
2404 getbundle2partsorder.insert(idx, stepname)
2404 getbundle2partsorder.insert(idx, stepname)
2405 return func
2405 return func
2406
2406
2407 return dec
2407 return dec
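
A sketch of registering a custom part generator through this decorator (the part name and payload are hypothetical); passing idx lets a caller splice the step at a given position instead of appending it:

    from mercurial import exchange

    @exchange.getbundle2partsgenerator(b'ext:mypart', idx=0)
    def _getbundlemypart(bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs):
        # idx=0 spliced this step at the front of getbundle2partsorder
        bundler.newpart(b'ext:mypart', data=b'example payload')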
2408
2408
2409
2409
2410 def bundle2requested(bundlecaps):
2410 def bundle2requested(bundlecaps):
2411 if bundlecaps is not None:
2411 if bundlecaps is not None:
2412 return any(cap.startswith(b'HG2') for cap in bundlecaps)
2412 return any(cap.startswith(b'HG2') for cap in bundlecaps)
2413 return False
2413 return False
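
bundle2requested treats any advertised capability starting with b'HG2' as a bundle2 request, and a missing bundlecaps as bundle1; for instance:

    >>> bundle2requested({b'HG20', b'bundle2=...'})
    True
    >>> bundle2requested({b'HG10UN'})
    False
    >>> bundle2requested(None)
    False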
2414
2414
2415
2415
2416 def getbundlechunks(
2416 def getbundlechunks(
2417 repo,
2417 repo,
2418 source,
2418 source,
2419 heads=None,
2419 heads=None,
2420 common=None,
2420 common=None,
2421 bundlecaps=None,
2421 bundlecaps=None,
2422 remote_sidedata=None,
2422 remote_sidedata=None,
2423 **kwargs
2423 **kwargs
2424 ):
2424 ):
2425 """Return chunks constituting a bundle's raw data.
2425 """Return chunks constituting a bundle's raw data.
2426
2426
2427 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
2427 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
2428 passed.
2428 passed.
2429
2429
2430 Returns a 2-tuple of a dict with metadata about the generated bundle
2430 Returns a 2-tuple of a dict with metadata about the generated bundle
2431 and an iterator over raw chunks (of varying sizes).
2431 and an iterator over raw chunks (of varying sizes).
2432 """
2432 """
2433 kwargs = pycompat.byteskwargs(kwargs)
2433 kwargs = pycompat.byteskwargs(kwargs)
2434 info = {}
2434 info = {}
2435 usebundle2 = bundle2requested(bundlecaps)
2435 usebundle2 = bundle2requested(bundlecaps)
2436 # bundle10 case
2436 # bundle10 case
2437 if not usebundle2:
2437 if not usebundle2:
2438 if bundlecaps and not kwargs.get(b'cg', True):
2438 if bundlecaps and not kwargs.get(b'cg', True):
2439 raise ValueError(
2439 raise ValueError(
2440 _(b'request for bundle10 must include changegroup')
2440 _(b'request for bundle10 must include changegroup')
2441 )
2441 )
2442
2442
2443 if kwargs:
2443 if kwargs:
2444 raise ValueError(
2444 raise ValueError(
2445 _(b'unsupported getbundle arguments: %s')
2445 _(b'unsupported getbundle arguments: %s')
2446 % b', '.join(sorted(kwargs.keys()))
2446 % b', '.join(sorted(kwargs.keys()))
2447 )
2447 )
2448 outgoing = _computeoutgoing(repo, heads, common)
2448 outgoing = _computeoutgoing(repo, heads, common)
2449 info[b'bundleversion'] = 1
2449 info[b'bundleversion'] = 1
2450 return (
2450 return (
2451 info,
2451 info,
2452 changegroup.makestream(
2452 changegroup.makestream(
2453 repo,
2453 repo,
2454 outgoing,
2454 outgoing,
2455 b'01',
2455 b'01',
2456 source,
2456 source,
2457 bundlecaps=bundlecaps,
2457 bundlecaps=bundlecaps,
2458 remote_sidedata=remote_sidedata,
2458 remote_sidedata=remote_sidedata,
2459 ),
2459 ),
2460 )
2460 )
2461
2461
2462 # bundle20 case
2462 # bundle20 case
2463 info[b'bundleversion'] = 2
2463 info[b'bundleversion'] = 2
2464 b2caps = {}
2464 b2caps = {}
2465 for bcaps in bundlecaps:
2465 for bcaps in bundlecaps:
2466 if bcaps.startswith(b'bundle2='):
2466 if bcaps.startswith(b'bundle2='):
2467 blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
2467 blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
2468 b2caps.update(bundle2.decodecaps(blob))
2468 b2caps.update(bundle2.decodecaps(blob))
2469 bundler = bundle2.bundle20(repo.ui, b2caps)
2469 bundler = bundle2.bundle20(repo.ui, b2caps)
2470
2470
2471 kwargs[b'heads'] = heads
2471 kwargs[b'heads'] = heads
2472 kwargs[b'common'] = common
2472 kwargs[b'common'] = common
2473
2473
2474 for name in getbundle2partsorder:
2474 for name in getbundle2partsorder:
2475 func = getbundle2partsmapping[name]
2475 func = getbundle2partsmapping[name]
2476 func(
2476 func(
2477 bundler,
2477 bundler,
2478 repo,
2478 repo,
2479 source,
2479 source,
2480 bundlecaps=bundlecaps,
2480 bundlecaps=bundlecaps,
2481 b2caps=b2caps,
2481 b2caps=b2caps,
2482 remote_sidedata=remote_sidedata,
2482 remote_sidedata=remote_sidedata,
2483 **pycompat.strkwargs(kwargs)
2483 **pycompat.strkwargs(kwargs)
2484 )
2484 )
2485
2485
2486 info[b'prefercompressed'] = bundler.prefercompressed
2486 info[b'prefercompressed'] = bundler.prefercompressed
2487
2487
2488 return info, bundler.getchunks()
2488 return info, bundler.getchunks()
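
The b2caps parsing in getbundlechunks is the inverse of caps20to10: the client quotes its encoded capabilities into a bundle2= entry, and the server unquotes and decodes it. A round-trip sketch, where repo is any local repository object:

    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=b'client'))
    entry = b'bundle2=' + urlreq.quote(capsblob)   # what caps20to10 adds
    # server side, as in the loop above:
    blob = urlreq.unquote(entry[len(b'bundle2='):])
    caps = bundle2.decodecaps(blob)                # back to a capabilities mapping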
2489
2489
2490
2490
2491 @getbundle2partsgenerator(b'stream')
2491 @getbundle2partsgenerator(b'stream')
2492 def _getbundlestream2(bundler, repo, *args, **kwargs):
2492 def _getbundlestream2(bundler, repo, *args, **kwargs):
2493 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
2493 return bundle2.addpartbundlestream2(bundler, repo, **kwargs)
2494
2494
2495
2495
2496 @getbundle2partsgenerator(b'changegroup')
2496 @getbundle2partsgenerator(b'changegroup')
2497 def _getbundlechangegrouppart(
2497 def _getbundlechangegrouppart(
2498 bundler,
2498 bundler,
2499 repo,
2499 repo,
2500 source,
2500 source,
2501 bundlecaps=None,
2501 bundlecaps=None,
2502 b2caps=None,
2502 b2caps=None,
2503 heads=None,
2503 heads=None,
2504 common=None,
2504 common=None,
2505 remote_sidedata=None,
2505 remote_sidedata=None,
2506 **kwargs
2506 **kwargs
2507 ):
2507 ):
2508 """add a changegroup part to the requested bundle"""
2508 """add a changegroup part to the requested bundle"""
2509 if not kwargs.get('cg', True) or not b2caps:
2509 if not kwargs.get('cg', True) or not b2caps:
2510 return
2510 return
2511
2511
2512 version = b'01'
2512 version = b'01'
2513 cgversions = b2caps.get(b'changegroup')
2513 cgversions = b2caps.get(b'changegroup')
2514 if cgversions: # 3.1 and 3.2 ship with an empty value
2514 if cgversions: # 3.1 and 3.2 ship with an empty value
2515 cgversions = [
2515 cgversions = [
2516 v
2516 v
2517 for v in cgversions
2517 for v in cgversions
2518 if v in changegroup.supportedoutgoingversions(repo)
2518 if v in changegroup.supportedoutgoingversions(repo)
2519 ]
2519 ]
2520 if not cgversions:
2520 if not cgversions:
2521 raise error.Abort(_(b'no common changegroup version'))
2521 raise error.Abort(_(b'no common changegroup version'))
2522 version = max(cgversions)
2522 version = max(cgversions)
2523
2523
2524 outgoing = _computeoutgoing(repo, heads, common)
2524 outgoing = _computeoutgoing(repo, heads, common)
2525 if not outgoing.missing:
2525 if not outgoing.missing:
2526 return
2526 return
2527
2527
2528 if kwargs.get('narrow', False):
2528 if kwargs.get('narrow', False):
2529 include = sorted(filter(bool, kwargs.get('includepats', [])))
2529 include = sorted(filter(bool, kwargs.get('includepats', [])))
2530 exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
2530 exclude = sorted(filter(bool, kwargs.get('excludepats', [])))
2531 matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
2531 matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
2532 else:
2532 else:
2533 matcher = None
2533 matcher = None
2534
2534
2535 cgstream = changegroup.makestream(
2535 cgstream = changegroup.makestream(
2536 repo,
2536 repo,
2537 outgoing,
2537 outgoing,
2538 version,
2538 version,
2539 source,
2539 source,
2540 bundlecaps=bundlecaps,
2540 bundlecaps=bundlecaps,
2541 matcher=matcher,
2541 matcher=matcher,
2542 remote_sidedata=remote_sidedata,
2542 remote_sidedata=remote_sidedata,
2543 )
2543 )
2544
2544
2545 part = bundler.newpart(b'changegroup', data=cgstream)
2545 part = bundler.newpart(b'changegroup', data=cgstream)
2546 if cgversions:
2546 if cgversions:
2547 part.addparam(b'version', version)
2547 part.addparam(b'version', version)
2548
2548
2549 part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)
2549 part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)
2550
2550
2551 if scmutil.istreemanifest(repo):
2551 if scmutil.istreemanifest(repo):
2552 part.addparam(b'treemanifest', b'1')
2552 part.addparam(b'treemanifest', b'1')
2553
2553
2554 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2554 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2555 part.addparam(b'exp-sidedata', b'1')
2556 sidedata = bundle2.format_remote_wanted_sidedata(repo)
2557 part.addparam(b'exp-wanted-sidedata', sidedata)
2558
2559 if (
2560 kwargs.get('narrow', False)
2561 and kwargs.get('narrow_acl', False)
2562 and (include or exclude)
2563 ):
2564 # this is mandatory because otherwise ACL clients won't work
2565 narrowspecpart = bundler.newpart(b'Narrow:responsespec')
2566 narrowspecpart.data = b'%s\0%s' % (
2567 b'\n'.join(include),
2568 b'\n'.join(exclude),
2569 )
2570
2571
2572 @getbundle2partsgenerator(b'bookmarks')
2573 def _getbundlebookmarkpart(
2574 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2575 ):
2576 """add a bookmark part to the requested bundle"""
2577 if not kwargs.get('bookmarks', False):
2578 return
2579 if not b2caps or b'bookmarks' not in b2caps:
2580 raise error.Abort(_(b'no common bookmarks exchange method'))
2581 books = bookmod.listbinbookmarks(repo)
2582 data = bookmod.binaryencode(repo, books)
2583 if data:
2584 bundler.newpart(b'bookmarks', data=data)
2585
2586
2587 @getbundle2partsgenerator(b'listkeys')
2588 def _getbundlelistkeysparts(
2589 bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
2590 ):
2591 """add parts containing listkeys namespaces to the requested bundle"""
2592 listkeys = kwargs.get('listkeys', ())
2593 for namespace in listkeys:
2594 part = bundler.newpart(b'listkeys')
2595 part.addparam(b'namespace', namespace)
2596 keys = repo.listkeys(namespace).items()
2597 part.data = pushkey.encodekeys(keys)
2598
2599
2600 @getbundle2partsgenerator(b'obsmarkers')
2601 def _getbundleobsmarkerpart(
2602 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2603 ):
2604 """add an obsolescence markers part to the requested bundle"""
2605 if kwargs.get('obsmarkers', False):
2606 if heads is None:
2607 heads = repo.heads()
2608 subset = [c.node() for c in repo.set(b'::%ln', heads)]
2608 revs = repo.revs(b'::%ln', heads)
2609 markers = repo.obsstore.relevantmarkers(subset)
2609 markers = repo.obsstore.relevantmarkers(revs=revs)
2610 markers = obsutil.sortedmarkers(markers)
2611 bundle2.buildobsmarkerspart(bundler, markers)
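This hunk is the core of the change: the old code used `repo.set(b'::%ln', heads)`, which instantiates one changectx object per ancestor just to extract its node, before handing the node list to `relevantmarkers`. The new code passes the revision numbers directly and lets the obsstore resolve what it needs. A minimal sketch of the two approaches, assuming a `repo` object and a `heads` list of nodes:

    # Before (sketch): one changectx allocation and .node() call per
    # ancestor of the requested heads - costly on large repositories.
    subset = [c.node() for c in repo.set(b'::%ln', heads)]
    markers = repo.obsstore.relevantmarkers(subset)

    # After (sketch): hand the revision numbers straight to the obsstore,
    # which can work on integer revs without materializing contexts.
    revs = repo.revs(b'::%ln', heads)
    markers = repo.obsstore.relevantmarkers(revs=revs)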
2612
2613
2614 @getbundle2partsgenerator(b'phases')
2615 def _getbundlephasespart(
2616 bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
2617 ):
2618 """add phase heads part to the requested bundle"""
2619 if kwargs.get('phases', False):
2620 if not b2caps or b'heads' not in b2caps.get(b'phases'):
2621 raise error.Abort(_(b'no common phases exchange method'))
2622 if heads is None:
2623 heads = repo.heads()
2624
2625 headsbyphase = collections.defaultdict(set)
2626 if repo.publishing():
2627 headsbyphase[phases.public] = heads
2628 else:
2629 # find the appropriate heads to move
2630
2631 phase = repo._phasecache.phase
2632 node = repo.changelog.node
2633 rev = repo.changelog.rev
2634 for h in heads:
2635 headsbyphase[phase(repo, rev(h))].add(h)
2636 seenphases = list(headsbyphase.keys())
2637
2638 # We do not handle anything but public and draft phases for now
2639 if seenphases:
2640 assert max(seenphases) <= phases.draft
2641
2642 # if client is pulling non-public changesets, we need to find
2643 # intermediate public heads.
2644 draftheads = headsbyphase.get(phases.draft, set())
2645 if draftheads:
2646 publicheads = headsbyphase.get(phases.public, set())
2647
2648 revset = b'heads(only(%ln, %ln) and public())'
2649 extraheads = repo.revs(revset, draftheads, publicheads)
2650 for r in extraheads:
2651 headsbyphase[phases.public].add(node(r))
2652
2653 # transform data in a format used by the encoding function
2654 phasemapping = {
2655 phase: sorted(headsbyphase[phase]) for phase in phases.allphases
2656 }
2657
2658 # generate the actual part
2659 phasedata = phases.binaryencode(phasemapping)
2660 bundler.newpart(b'phase-heads', data=phasedata)
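A worked example may help with the intermediate-public-heads logic above. This is a sketch with fabricated changesets, not output from a real repository:

    # Hypothetical linear history p1 -> p2 -> d, where p1 and p2 are public,
    # d is draft, and a client pulls heads=[d] from a non-publishing server.
    #
    #   headsbyphase == {phases.draft: {d}}     # p2 is not a repo head
    #
    # only(d, ()) is the full ancestry of d, so
    # heads(only(d, ()) and public()) resolves to {p2}, and the encoded
    # mapping becomes roughly:
    #
    #   phasemapping == {phases.public: [p2], phases.draft: [d], ...}
    #
    # letting the client move p2 (and everything below it) to public.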
2661
2662
2663 @getbundle2partsgenerator(b'hgtagsfnodes')
2664 def _getbundletagsfnodes(
2665 bundler,
2666 repo,
2667 source,
2668 bundlecaps=None,
2669 b2caps=None,
2670 heads=None,
2671 common=None,
2672 **kwargs
2673 ):
2674 """Transfer the .hgtags filenodes mapping.
2675
2676 Only values for heads in this bundle will be transferred.
2677
2678 The part data consists of pairs of 20 byte changeset node and .hgtags
2679 filenodes raw values.
2680 """
2681 # Don't send unless:
2682 # - changesets are being exchanged,
2683 # - the client supports it.
2684 if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps):
2685 return
2686
2687 outgoing = _computeoutgoing(repo, heads, common)
2688 bundle2.addparttagsfnodescache(repo, bundler, outgoing)
2689
2690
2691 @getbundle2partsgenerator(b'cache:rev-branch-cache')
2692 def _getbundlerevbranchcache(
2693 bundler,
2694 repo,
2695 source,
2696 bundlecaps=None,
2697 b2caps=None,
2698 heads=None,
2699 common=None,
2700 **kwargs
2701 ):
2702 """Transfer the rev-branch-cache mapping
2703
2704 The payload is a series of data related to each branch
2705
2706 1) branch name length
2707 2) number of open heads
2708 3) number of closed heads
2709 4) open heads nodes
2710 5) closed heads nodes
2711 """
2712 # Don't send unless:
2713 # - changesets are being exchanged,
2714 # - the client supports it.
2715 # - narrow bundle isn't in play (not currently compatible).
2716 if (
2717 not kwargs.get('cg', True)
2718 or not b2caps
2719 or b'rev-branch-cache' not in b2caps
2720 or kwargs.get('narrow', False)
2721 or repo.ui.has_section(_NARROWACL_SECTION)
2722 ):
2723 return
2724
2725 outgoing = _computeoutgoing(repo, heads, common)
2726 bundle2.addpartrevbranchcache(repo, bundler, outgoing)
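The five fields in the docstring above describe the per-branch payload but not the field widths. The sketch below decodes such a payload under the assumption of big-endian uint32 counts, the branch name following the counts, and 20-byte nodes; consult `bundle2.addpartrevbranchcache` for the authoritative layout:

    import struct

    def iter_rbc_payload(data):
        """Hedged decoder sketch for the rev-branch-cache part payload."""
        off = 0
        while off < len(data):
            namelen, n_open, n_closed = struct.unpack_from(b'>III', data, off)
            off += struct.calcsize(b'>III')
            name = data[off:off + namelen]  # branch name
            off += namelen
            nodes = [
                data[off + i * 20:off + (i + 1) * 20]
                for i in range(n_open + n_closed)
            ]
            off += 20 * (n_open + n_closed)
            yield name, nodes[:n_open], nodes[n_open:]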
2727
2728
2729 def check_heads(repo, their_heads, context):
2730 """check if the heads of a repo have been modified
2731
2732 Used by peer for unbundling.
2733 """
2734 heads = repo.heads()
2735 heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest()
2736 if not (
2737 their_heads == [b'force']
2738 or their_heads == heads
2739 or their_heads == [b'hashed', heads_hash]
2740 ):
2741 # someone else committed/pushed/unbundled while we
2742 # were transferring data
2743 raise error.PushRaced(
2744 b'repository changed while %s - please try again' % context
2745 )
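The `[b'hashed', heads_hash]` branch above lets a pushing peer prove which remote heads its bundle was built against without shipping them all. A sketch of the client side, reusing the same digest computation (the helper name is made up):

    from mercurial.utils import hashutil

    def hashed_heads(remote_heads):
        # remote_heads: the 20-byte changeset nodes observed on the remote
        digest = hashutil.sha1(b''.join(sorted(remote_heads))).digest()
        return [b'hashed', digest]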
2746
2747
2748 def unbundle(repo, cg, heads, source, url):
2749 """Apply a bundle to a repo.
2750
2751 this function makes sure the repo is locked during the application and has
2752 a mechanism to check that no push race occurred between the creation of the
2753 bundle and its application.
2754
2755 If the push was raced, a PushRaced exception is raised."""
2756 r = 0
2757 # need a transaction when processing a bundle2 stream
2758 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2759 lockandtr = [None, None, None]
2760 recordout = None
2761 # quick fix for output mismatch with bundle2 in 3.4
2762 captureoutput = repo.ui.configbool(
2763 b'experimental', b'bundle2-output-capture'
2764 )
2765 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2766 captureoutput = True
2767 try:
2768 # note: outside bundle1, 'heads' is expected to be empty and this
2769 # 'check_heads' call will be a no-op
2770 check_heads(repo, heads, b'uploading changes')
2771 # push can proceed
2772 if not isinstance(cg, bundle2.unbundle20):
2773 # legacy case: bundle1 (changegroup 01)
2774 txnname = b"\n".join([source, urlutil.hidepassword(url)])
2775 with repo.lock(), repo.transaction(txnname) as tr:
2776 op = bundle2.applybundle(repo, cg, tr, source, url)
2777 r = bundle2.combinechangegroupresults(op)
2778 else:
2779 r = None
2780 try:
2781
2782 def gettransaction():
2783 if not lockandtr[2]:
2784 if not bookmod.bookmarksinstore(repo):
2785 lockandtr[0] = repo.wlock()
2786 lockandtr[1] = repo.lock()
2787 lockandtr[2] = repo.transaction(source)
2788 lockandtr[2].hookargs[b'source'] = source
2789 lockandtr[2].hookargs[b'url'] = url
2790 lockandtr[2].hookargs[b'bundle2'] = b'1'
2791 return lockandtr[2]
2792
2793 # Do greedy locking by default until we're satisfied with lazy
2794 # locking.
2795 if not repo.ui.configbool(
2796 b'experimental', b'bundle2lazylocking'
2797 ):
2798 gettransaction()
2799
2800 op = bundle2.bundleoperation(
2801 repo,
2802 gettransaction,
2803 captureoutput=captureoutput,
2804 source=b'push',
2805 )
2806 try:
2807 op = bundle2.processbundle(repo, cg, op=op)
2808 finally:
2809 r = op.reply
2810 if captureoutput and r is not None:
2811 repo.ui.pushbuffer(error=True, subproc=True)
2812
2813 def recordout(output):
2814 r.newpart(b'output', data=output, mandatory=False)
2815
2816 if lockandtr[2] is not None:
2817 lockandtr[2].close()
2818 except BaseException as exc:
2819 exc.duringunbundle2 = True
2820 if captureoutput and r is not None:
2821 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2822
2823 def recordout(output):
2824 part = bundle2.bundlepart(
2825 b'output', data=output, mandatory=False
2826 )
2827 parts.append(part)
2828
2829 raise
2830 finally:
2831 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2832 if recordout is not None:
2833 recordout(repo.ui.popbuffer())
2834 return r
2835
2836
2837 def _maybeapplyclonebundle(pullop):
2838 """Apply a clone bundle from a remote, if possible."""
2839
2840 repo = pullop.repo
2841 remote = pullop.remote
2842
2843 if not repo.ui.configbool(b'ui', b'clonebundles'):
2844 return
2845
2846 # Only run if local repo is empty.
2847 if len(repo):
2848 return
2849
2850 if pullop.heads:
2851 return
2852
2853 if not remote.capable(b'clonebundles'):
2854 return
2855
2856 with remote.commandexecutor() as e:
2857 res = e.callcommand(b'clonebundles', {}).result()
2858
2859 # If we call the wire protocol command, that's good enough to record the
2860 # attempt.
2861 pullop.clonebundleattempted = True
2862
2863 entries = bundlecaches.parseclonebundlesmanifest(repo, res)
2864 if not entries:
2865 repo.ui.note(
2866 _(
2867 b'no clone bundles available on remote; '
2868 b'falling back to regular clone\n'
2869 )
2870 )
2871 return
2872
2873 entries = bundlecaches.filterclonebundleentries(
2874 repo, entries, streamclonerequested=pullop.streamclonerequested
2875 )
2876
2877 if not entries:
2878 # There is a thundering herd concern here. However, if a server
2879 # operator doesn't advertise bundles appropriate for its clients,
2880 # they deserve what's coming. Furthermore, from a client's
2881 # perspective, no automatic fallback would mean not being able to
2882 # clone!
2883 repo.ui.warn(
2884 _(
2885 b'no compatible clone bundles available on server; '
2886 b'falling back to regular clone\n'
2887 )
2888 )
2889 repo.ui.warn(
2890 _(b'(you may want to report this to the server operator)\n')
2891 )
2892 return
2893
2894 entries = bundlecaches.sortclonebundleentries(repo.ui, entries)
2895
2896 url = entries[0][b'URL']
2897 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2898 if trypullbundlefromurl(repo.ui, repo, url, remote):
2899 repo.ui.status(_(b'finished applying clone bundle\n'))
2900 # Bundle failed.
2901 #
2902 # We abort by default to avoid the thundering herd of
2903 # clients flooding a server that was expecting expensive
2904 # clone load to be offloaded.
2905 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2906 repo.ui.warn(_(b'falling back to normal clone\n'))
2907 else:
2908 raise error.Abort(
2909 _(b'error applying bundle'),
2910 hint=_(
2911 b'if this error persists, consider contacting '
2912 b'the server operator or disable clone '
2913 b'bundles via '
2914 b'"--config ui.clonebundles=false"'
2915 ),
2916 )
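For context, the manifest that `parseclonebundlesmanifest` consumes above is one bundle per line: a URL optionally followed by space-separated `key=value` attributes. A hedged example (URLs fabricated; `BUNDLESPEC` and `REQUIRESNI` are the conventional attribute names):

    https://cdn.example.com/hg/repo-gzip-v2.hg BUNDLESPEC=gzip-v2
    https://cdn.example.com/hg/repo-zstd-v2.hg BUNDLESPEC=zstd-v2 REQUIRESNI=true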
2917
2918
2919 def inline_clone_bundle_open(ui, url, peer):
2920 if not peer:
2921 raise error.Abort(_(b'no remote repository supplied for %s' % url))
2922 clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :]
2923 peerclonebundle = peer.get_cached_bundle_inline(clonebundleid)
2924 return util.chunkbuffer(peerclonebundle)
2925
2926
2927 def trypullbundlefromurl(ui, repo, url, peer):
2928 """Attempt to apply a bundle from a URL."""
2929 with repo.lock(), repo.transaction(b'bundleurl') as tr:
2930 try:
2931 if url.startswith(bundlecaches.CLONEBUNDLESCHEME):
2932 fh = inline_clone_bundle_open(ui, url, peer)
2933 else:
2934 fh = urlmod.open(ui, url)
2935 cg = readbundle(ui, fh, b'stream')
2936
2937 if isinstance(cg, streamclone.streamcloneapplier):
2938 cg.apply(repo)
2939 else:
2940 bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
2941 return True
2942 except urlerr.httperror as e:
2943 ui.warn(
2944 _(b'HTTP error fetching bundle: %s\n')
2945 % stringutil.forcebytestr(e)
2946 )
2947 except urlerr.urlerror as e:
2948 ui.warn(
2949 _(b'error fetching bundle: %s\n')
2950 % stringutil.forcebytestr(e.reason)
2951 )
2952
2953 return False
@@ -1,1155 +1,1170
1 # obsolete.py - obsolete markers handling
2 #
3 # Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
4 # Logilab SA <contact@logilab.fr>
5 #
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
8
9 """Obsolete marker handling
10
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
15
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
18 building new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
20 besides old and new changeset identifiers, such as creation date or
21 author name.
22
23 The old obsoleted changeset is called a "predecessor" and possible
24 replacements are called "successors". Markers that used changeset X as
25 a predecessor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
27 a successor are called "predecessor markers of Y" because they hold
28 information about the predecessors of Y.
29
30 Examples:
31
32 - When changeset A is replaced by changeset A', one marker is stored:
33
34 (A, (A',))
35
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
38
39 (A, (C,)) and (B, (C,))
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
43 (A, ())
44
45 - When changeset A is split into B and C, a single marker is used:
46
47 (A, (B, C))
48
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A into A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
53
54 (A, (B,)) and (A, (C,))
55
56 Format
57 ------
58
59 Markers are stored in an append-only file stored in
60 '.hg/store/obsstore'.
61
62 The file starts with a version header:
63
64 - 1 unsigned byte: version number, starting at zero.
65
66 The header is followed by the markers. Marker format depends on the version.
67 See the comment associated with each format for details.
68
69 """
70
71 import binascii
72 import struct
73 import weakref
74
75 from .i18n import _
76 from .node import (
77 bin,
78 hex,
79 )
80 from . import (
81 encoding,
82 error,
83 obsutil,
84 phases,
85 policy,
86 pycompat,
87 util,
88 )
89 from .utils import (
90 dateutil,
91 hashutil,
92 )
93
94 parsers = policy.importmod('parsers')
95
96 _pack = struct.pack
97 _unpack = struct.unpack
98 _calcsize = struct.calcsize
99 propertycache = util.propertycache
100
101 # Options for obsolescence
102 createmarkersopt = b'createmarkers'
103 allowunstableopt = b'allowunstable'
104 allowdivergenceopt = b'allowdivergence'
105 exchangeopt = b'exchange'
106
107
108 def _getoptionvalue(repo, option):
109 """Returns True if the given repository has the given obsolete option
110 enabled.
111 """
112 configkey = b'evolution.%s' % option
113 newconfig = repo.ui.configbool(b'experimental', configkey)
114
115 # Return the value only if defined
116 if newconfig is not None:
117 return newconfig
118
119 # Fall back on the generic option
120 try:
121 return repo.ui.configbool(b'experimental', b'evolution')
122 except (error.ConfigError, AttributeError):
123 # Fall back on the old-fashioned config
124 # inconsistent config: experimental.evolution
125 result = set(repo.ui.configlist(b'experimental', b'evolution'))
126
127 if b'all' in result:
128 return True
129
130 # Temporary hack for next check
131 newconfig = repo.ui.config(b'experimental', b'evolution.createmarkers')
132 if newconfig:
133 result.add(b'createmarkers')
134
135 return option in result
136
137
138 def getoptions(repo):
139 """Returns a dict showing the state of obsolescence features."""
140
141 createmarkersvalue = _getoptionvalue(repo, createmarkersopt)
142 if createmarkersvalue:
143 unstablevalue = _getoptionvalue(repo, allowunstableopt)
144 divergencevalue = _getoptionvalue(repo, allowdivergenceopt)
145 exchangevalue = _getoptionvalue(repo, exchangeopt)
146 else:
147 # if we cannot create obsolescence markers, we shouldn't exchange them
148 # or perform operations that lead to instability or divergence
149 unstablevalue = False
150 divergencevalue = False
151 exchangevalue = False
152
153 return {
154 createmarkersopt: createmarkersvalue,
155 allowunstableopt: unstablevalue,
156 allowdivergenceopt: divergencevalue,
157 exchangeopt: exchangevalue,
158 }
159
160
161 def isenabled(repo, option):
162 """Returns True if the given repository has the given obsolete option
163 enabled.
164 """
165 return getoptions(repo)[option]
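A short usage sketch for the two helpers above, assuming a configured `repo` object is in scope:

    opts = getoptions(repo)
    if opts[createmarkersopt] and not opts[exchangeopt]:
        pass  # markers are created locally but never exchanged

    if isenabled(repo, allowunstableopt):
        pass  # history rewriting may leave unstable descendants behind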
166
167
168 # Creating aliases for marker flags because evolve extension looks for
169 # bumpedfix in obsolete.py
170 bumpedfix = obsutil.bumpedfix
171 usingsha256 = obsutil.usingsha256
172
173 ## Parsing and writing of version "0"
174 #
175 # The header is followed by the markers. Each marker is made of:
176 #
177 # - 1 uint8 : number of new changesets "N", can be zero.
178 #
179 # - 1 uint32: metadata size "M" in bytes.
180 #
181 # - 1 byte: a bit field. It is reserved for flags used in common
182 # obsolete marker operations, to avoid repeated decoding of metadata
183 # entries.
184 #
185 # - 20 bytes: obsoleted changeset identifier.
186 #
187 # - N*20 bytes: new changesets identifiers.
188 #
189 # - M bytes: metadata as a sequence of nul-terminated strings. Each
190 # string contains a key and a value, separated by a colon ':', without
191 # additional encoding. Keys cannot contain '\0' or ':' and values
192 # cannot contain '\0'.
193 _fm0version = 0
194 _fm0fixed = b'>BIB20s'
195 _fm0node = b'20s'
196 _fm0fsize = _calcsize(_fm0fixed)
197 _fm0fnodesize = _calcsize(_fm0node)
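A hand-decoding sketch of one version-0 fixed record per the layout documented above, with fabricated values (one successor, 12 metadata bytes, no flags):

    import struct

    record = struct.pack(b'>BIB20s', 1, 12, 0, b'\x11' * 20)
    numsuc, mdsize, flags, pre = struct.unpack(b'>BIB20s', record)
    # numsuc * 20 bytes of successor ids follow the fixed part, then
    # mdsize bytes of '\0'-separated 'key:value' metadata strings.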
198
199
200 def _fm0readmarkers(data, off, stop):
201 # Loop on markers
202 while off < stop:
203 # read fixed part
204 cur = data[off : off + _fm0fsize]
205 off += _fm0fsize
206 numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
207 # read replacement
208 sucs = ()
209 if numsuc:
210 s = _fm0fnodesize * numsuc
211 cur = data[off : off + s]
212 sucs = _unpack(_fm0node * numsuc, cur)
213 off += s
214 # read metadata
215 # (metadata will be decoded on demand)
216 metadata = data[off : off + mdsize]
217 if len(metadata) != mdsize:
218 raise error.Abort(
219 _(
220 b'parsing obsolete marker: metadata is too '
221 b'short, %d bytes expected, got %d'
222 )
223 % (mdsize, len(metadata))
224 )
225 off += mdsize
226 metadata = _fm0decodemeta(metadata)
227 try:
228 when, offset = metadata.pop(b'date', b'0 0').split(b' ')
229 date = float(when), int(offset)
230 except ValueError:
231 date = (0.0, 0)
232 parents = None
233 if b'p2' in metadata:
234 parents = (metadata.pop(b'p1', None), metadata.pop(b'p2', None))
235 elif b'p1' in metadata:
236 parents = (metadata.pop(b'p1', None),)
237 elif b'p0' in metadata:
238 parents = ()
239 if parents is not None:
240 try:
241 parents = tuple(bin(p) for p in parents)
242 # if parent content is not a nodeid, drop the data
243 for p in parents:
244 if len(p) != 20:
245 parents = None
246 break
247 except binascii.Error:
248 # if content cannot be translated to nodeid drop the data.
249 parents = None
250
251 metadata = tuple(sorted(metadata.items()))
252
253 yield (pre, sucs, flags, metadata, date, parents)
254
255
256 def _fm0encodeonemarker(marker):
257 pre, sucs, flags, metadata, date, parents = marker
258 if flags & usingsha256:
259 raise error.Abort(_(b'cannot handle sha256 with old obsstore format'))
260 metadata = dict(metadata)
261 time, tz = date
262 metadata[b'date'] = b'%r %i' % (time, tz)
263 if parents is not None:
264 if not parents:
265 # mark that we explicitly recorded no parents
266 metadata[b'p0'] = b''
267 for i, p in enumerate(parents, 1):
268 metadata[b'p%i' % i] = hex(p)
269 metadata = _fm0encodemeta(metadata)
270 numsuc = len(sucs)
271 format = _fm0fixed + (_fm0node * numsuc)
272 data = [numsuc, len(metadata), flags, pre]
273 data.extend(sucs)
274 return _pack(format, *data) + metadata
275
276
277 def _fm0encodemeta(meta):
278 """Return encoded metadata string to string mapping.
279
280 Assume no ':' in key and no '\0' in both key and value."""
281 for key, value in meta.items():
282 if b':' in key or b'\0' in key:
283 raise ValueError(b"':' and '\0' are forbidden in metadata key")
284 if b'\0' in value:
285 raise ValueError(b"'\0' is forbidden in metadata value")
286 return b'\0'.join([b'%s:%s' % (k, meta[k]) for k in sorted(meta)])
287
288
289 def _fm0decodemeta(data):
290 """Return string to string dictionary from encoded version."""
291 d = {}
292 for l in data.split(b'\0'):
293 if l:
294 key, value = l.split(b':', 1)
295 d[key] = value
296 return d
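A quick round-trip sanity sketch for the two helpers above:

    meta = {b'user': b'alice', b'note': b'amended'}
    blob = _fm0encodemeta(meta)   # b'note:amended\x00user:alice'
    assert _fm0decodemeta(blob) == meta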
297
298
299 ## Parsing and writing of version "1"
300 #
301 # The header is followed by the markers. Each marker is made of:
302 #
303 # - uint32: total size of the marker (including this field)
304 #
305 # - float64: date in seconds since epoch
306 #
307 # - int16: timezone offset in minutes
308 #
309 # - uint16: a bit field. It is reserved for flags used in common
310 # obsolete marker operations, to avoid repeated decoding of metadata
311 # entries.
312 #
313 # - uint8: number of successors "N", can be zero.
314 #
315 # - uint8: number of parents "P", can be zero.
316 #
317 # 0: parents data stored but no parent,
318 # 1: one parent stored,
319 # 2: two parents stored,
320 # 3: no parent data stored
321 #
322 # - uint8: number of metadata entries M
323 #
324 # - 20 or 32 bytes: predecessor changeset identifier.
325 #
326 # - N*(20 or 32) bytes: successors changesets identifiers.
327 #
328 # - P*(20 or 32) bytes: parents of the predecessors changesets.
329 #
330 # - M*(uint8, uint8): size of all metadata entries (key and value)
331 #
332 # - remaining bytes: the metadata, each (key, value) pair after the other.
333 _fm1version = 1
334 _fm1fixed = b'>IdhHBBB'
335 _fm1nodesha1 = b'20s'
336 _fm1nodesha256 = b'32s'
337 _fm1nodesha1size = _calcsize(_fm1nodesha1)
338 _fm1nodesha256size = _calcsize(_fm1nodesha256)
339 _fm1fsize = _calcsize(_fm1fixed)
340 _fm1parentnone = 3
341 _fm1metapair = b'BB'
342 _fm1metapairsize = _calcsize(_fm1metapair)
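From the format string just defined, the version-1 fixed header is 19 bytes, which matches the documented fields:

    import struct

    # 4 (total size) + 8 (date) + 2 (tz) + 2 (flags)
    # + 1 (#successors) + 1 (#parents) + 1 (#metadata) == 19
    assert struct.calcsize(b'>IdhHBBB') == 19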
343
344
345 def _fm1purereadmarkers(data, off, stop):
346 # make some global constants local for performance
347 noneflag = _fm1parentnone
348 sha2flag = usingsha256
349 sha1size = _fm1nodesha1size
350 sha2size = _fm1nodesha256size
351 sha1fmt = _fm1nodesha1
352 sha2fmt = _fm1nodesha256
353 metasize = _fm1metapairsize
354 metafmt = _fm1metapair
355 fsize = _fm1fsize
356 unpack = _unpack
357
358 # Loop on markers
359 ufixed = struct.Struct(_fm1fixed).unpack
360
361 while off < stop:
362 # read fixed part
363 o1 = off + fsize
364 t, secs, tz, flags, numsuc, numpar, nummeta = ufixed(data[off:o1])
365
366 if flags & sha2flag:
367 nodefmt = sha2fmt
368 nodesize = sha2size
369 else:
370 nodefmt = sha1fmt
371 nodesize = sha1size
372
373 (prec,) = unpack(nodefmt, data[o1 : o1 + nodesize])
374 o1 += nodesize
375
376 # read 0 or more successors
377 if numsuc == 1:
378 o2 = o1 + nodesize
379 sucs = (data[o1:o2],)
380 else:
381 o2 = o1 + nodesize * numsuc
382 sucs = unpack(nodefmt * numsuc, data[o1:o2])
383
384 # read parents
385 if numpar == noneflag:
386 o3 = o2
387 parents = None
388 elif numpar == 1:
389 o3 = o2 + nodesize
390 parents = (data[o2:o3],)
391 else:
392 o3 = o2 + nodesize * numpar
393 parents = unpack(nodefmt * numpar, data[o2:o3])
394
395 # read metadata
396 off = o3 + metasize * nummeta
397 metapairsize = unpack(b'>' + (metafmt * nummeta), data[o3:off])
398 metadata = []
399 for idx in range(0, len(metapairsize), 2):
400 o1 = off + metapairsize[idx]
401 o2 = o1 + metapairsize[idx + 1]
402 metadata.append((data[off:o1], data[o1:o2]))
403 off = o2
404
405 yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)
406
407
408 def _fm1encodeonemarker(marker):
409 pre, sucs, flags, metadata, date, parents = marker
410 # determine node size
411 _fm1node = _fm1nodesha1
412 if flags & usingsha256:
413 _fm1node = _fm1nodesha256
414 numsuc = len(sucs)
415 numextranodes = 1 + numsuc
416 if parents is None:
417 numpar = _fm1parentnone
418 else:
419 numpar = len(parents)
420 numextranodes += numpar
421 formatnodes = _fm1node * numextranodes
422 formatmeta = _fm1metapair * len(metadata)
423 format = _fm1fixed + formatnodes + formatmeta
424 # tz is stored in minutes so we divide by 60
425 tz = date[1] // 60
426 data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
427 data.extend(sucs)
428 if parents is not None:
429 data.extend(parents)
430 totalsize = _calcsize(format)
431 for key, value in metadata:
432 lk = len(key)
433 lv = len(value)
434 if lk > 255:
435 msg = (
436 b'obsstore metadata key cannot be longer than 255 bytes'
437 b' (key "%s" is %u bytes)'
438 ) % (key, lk)
439 raise error.ProgrammingError(msg)
440 if lv > 255:
441 msg = (
442 b'obsstore metadata value cannot be longer than 255 bytes'
443 b' (value "%s" for key "%s" is %u bytes)'
444 ) % (value, key, lv)
445 raise error.ProgrammingError(msg)
446 data.append(lk)
447 data.append(lv)
448 totalsize += lk + lv
449 data[0] = totalsize
450 data = [_pack(format, *data)]
451 for key, value in metadata:
452 data.append(key)
453 data.append(value)
454 return b''.join(data)
455
456
457 def _fm1readmarkers(data, off, stop):
458 native = getattr(parsers, 'fm1readmarkers', None)
459 if not native:
460 return _fm1purereadmarkers(data, off, stop)
461 return native(data, off, stop)
462
463
464 # mapping to read/write various marker formats
465 # <version> -> (decoder, encoder)
466 formats = {
467 _fm0version: (_fm0readmarkers, _fm0encodeonemarker),
468 _fm1version: (_fm1readmarkers, _fm1encodeonemarker),
469 }
470
471
472 def _readmarkerversion(data):
473 return _unpack(b'>B', data[0:1])[0]
474
475
476 @util.nogc
477 def _readmarkers(data, off=None, stop=None):
478 """Read and enumerate markers from raw data"""
479 diskversion = _readmarkerversion(data)
480 if not off:
481 off = 1 # skip 1 byte version number
482 if stop is None:
483 stop = len(data)
484 if diskversion not in formats:
485 msg = _(b'parsing obsolete marker: unknown version %r') % diskversion
486 raise error.UnknownVersion(msg, version=diskversion)
487 return diskversion, formats[diskversion][0](data, off, stop)
488
489
490 def encodeheader(version=_fm0version):
491 return _pack(b'>B', version)
492
493
494 def encodemarkers(markers, addheader=False, version=_fm0version):
495 # Kept separate from flushmarkers(), it will be reused for
496 # markers exchange.
497 encodeone = formats[version][1]
498 if addheader:
499 yield encodeheader(version)
500 for marker in markers:
501 yield encodeone(marker)
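An end-to-end round trip through the helpers above is a good way to see the marker tuple shape. This is a sketch with fabricated node values, assuming the module is importable as `mercurial.obsolete`:

    from mercurial import obsolete

    prec = b'\x11' * 20        # fabricated 20-byte predecessor node
    succ = b'\x22' * 20        # fabricated 20-byte successor node
    marker = (prec, (succ,), 0, ((b'user', b'alice'),), (0.0, 0), None)

    blob = b''.join(
        obsolete.encodemarkers(
            [marker], addheader=True, version=obsolete._fm1version
        )
    )
    version, markers = obsolete._readmarkers(blob)
    assert version == obsolete._fm1version
    assert next(iter(markers))[0] == prec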
502
502
503
503
504 @util.nogc
504 @util.nogc
505 def _addsuccessors(successors, markers):
505 def _addsuccessors(successors, markers):
506 for mark in markers:
506 for mark in markers:
507 successors.setdefault(mark[0], set()).add(mark)
507 successors.setdefault(mark[0], set()).add(mark)
508
508
509
509
510 @util.nogc
510 @util.nogc
511 def _addpredecessors(predecessors, markers):
511 def _addpredecessors(predecessors, markers):
512 for mark in markers:
512 for mark in markers:
513 for suc in mark[1]:
513 for suc in mark[1]:
514 predecessors.setdefault(suc, set()).add(mark)
514 predecessors.setdefault(suc, set()).add(mark)
515
515
516
516
517 @util.nogc
517 @util.nogc
518 def _addchildren(children, markers):
518 def _addchildren(children, markers):
519 for mark in markers:
519 for mark in markers:
520 parents = mark[5]
520 parents = mark[5]
521 if parents is not None:
521 if parents is not None:
522 for p in parents:
522 for p in parents:
523 children.setdefault(p, set()).add(mark)
523 children.setdefault(p, set()).add(mark)
524
524
525
525
526 def _checkinvalidmarkers(repo, markers):
526 def _checkinvalidmarkers(repo, markers):
527 """search for marker with invalid data and raise error if needed
527 """search for marker with invalid data and raise error if needed
528
528
529 Exist as a separated function to allow the evolve extension for a more
529 Exist as a separated function to allow the evolve extension for a more
530 subtle handling.
530 subtle handling.
531 """
531 """
532 for mark in markers:
532 for mark in markers:
533 if repo.nullid in mark[1]:
533 if repo.nullid in mark[1]:
534 raise error.Abort(
534 raise error.Abort(
535 _(
535 _(
536 b'bad obsolescence marker detected: '
536 b'bad obsolescence marker detected: '
537 b'invalid successors nullid'
537 b'invalid successors nullid'
538 )
538 )
539 )
539 )
540
540
541
541
542 class obsstore:
542 class obsstore:
543 """Store obsolete markers
543 """Store obsolete markers
544
544
545 Markers can be accessed with two mappings:
545 Markers can be accessed with two mappings:
546 - predecessors[x] -> set(markers on predecessors edges of x)
546 - predecessors[x] -> set(markers on predecessors edges of x)
547 - successors[x] -> set(markers on successors edges of x)
547 - successors[x] -> set(markers on successors edges of x)
548 - children[x] -> set(markers on predecessors edges of children(x)
548 - children[x] -> set(markers on predecessors edges of children(x)
549 """
549 """
550
550
551 fields = (b'prec', b'succs', b'flag', b'meta', b'date', b'parents')
551 fields = (b'prec', b'succs', b'flag', b'meta', b'date', b'parents')
552 # prec: nodeid, predecessors changesets
552 # prec: nodeid, predecessors changesets
553 # succs: tuple of nodeid, successor changesets (0-N length)
553 # succs: tuple of nodeid, successor changesets (0-N length)
554 # flag: integer, flag field carrying modifier for the markers (see doc)
554 # flag: integer, flag field carrying modifier for the markers (see doc)
555 # meta: binary blob in UTF-8, encoded metadata dictionary
555 # meta: binary blob in UTF-8, encoded metadata dictionary
556 # date: (float, int) tuple, date of marker creation
556 # date: (float, int) tuple, date of marker creation
557 # parents: (tuple of nodeid) or None, parents of predecessors
557 # parents: (tuple of nodeid) or None, parents of predecessors
558 # None is used when no data has been recorded
558 # None is used when no data has been recorded
559
559
560 def __init__(self, repo, svfs, defaultformat=_fm1version, readonly=False):
560 def __init__(self, repo, svfs, defaultformat=_fm1version, readonly=False):
561 # caches for various obsolescence related cache
561 # caches for various obsolescence related cache
562 self.caches = {}
562 self.caches = {}
563 self.svfs = svfs
563 self.svfs = svfs
564 self._repo = weakref.ref(repo)
564 self._repo = weakref.ref(repo)
565 self._defaultformat = defaultformat
565 self._defaultformat = defaultformat
566 self._readonly = readonly
566 self._readonly = readonly
567
567
568 @property
568 @property
569 def repo(self):
569 def repo(self):
570 r = self._repo()
570 r = self._repo()
571 if r is None:
571 if r is None:
572 msg = "using the obsstore of a deallocated repo"
572 msg = "using the obsstore of a deallocated repo"
573 raise error.ProgrammingError(msg)
573 raise error.ProgrammingError(msg)
574 return r
574 return r
575
575
576 def __iter__(self):
576 def __iter__(self):
577 return iter(self._all)
577 return iter(self._all)
578
578
579 def __len__(self):
579 def __len__(self):
580 return len(self._all)
580 return len(self._all)
581
581
582 def __nonzero__(self):
582 def __nonzero__(self):
583 from . import statichttprepo
583 from . import statichttprepo
584
584
585 if isinstance(self.repo, statichttprepo.statichttprepository):
585 if isinstance(self.repo, statichttprepo.statichttprepository):
586 # If repo is accessed via static HTTP, then we can't use os.stat()
586 # If repo is accessed via static HTTP, then we can't use os.stat()
587 # to just peek at the file size.
587 # to just peek at the file size.
588 return len(self._data) > 1
588 return len(self._data) > 1
589 if not self._cached('_all'):
589 if not self._cached('_all'):
590 try:
590 try:
591 return self.svfs.stat(b'obsstore').st_size > 1
591 return self.svfs.stat(b'obsstore').st_size > 1
592 except FileNotFoundError:
592 except FileNotFoundError:
593 # just build an empty _all list if no obsstore exists, which
593 # just build an empty _all list if no obsstore exists, which
594 # avoids further stat() syscalls
594 # avoids further stat() syscalls
595 pass
595 pass
596 return bool(self._all)
596 return bool(self._all)
597
597
598 __bool__ = __nonzero__
598 __bool__ = __nonzero__
599
599
600 @property
600 @property
601 def readonly(self):
601 def readonly(self):
602 """True if marker creation is disabled
602 """True if marker creation is disabled
603
603
604 Remove me in the future when obsolete markers are always on."""
604 Remove me in the future when obsolete markers are always on."""
605 return self._readonly
605 return self._readonly
606
606
607 def create(
607 def create(
608 self,
608 self,
609 transaction,
609 transaction,
610 prec,
610 prec,
611 succs=(),
611 succs=(),
612 flag=0,
612 flag=0,
613 parents=None,
613 parents=None,
614 date=None,
614 date=None,
615 metadata=None,
615 metadata=None,
616 ui=None,
616 ui=None,
617 ):
617 ):
618 """obsolete: add a new obsolete marker
618 """obsolete: add a new obsolete marker
619
619
620 * ensure it is hashable
620 * ensure it is hashable
621 * check mandatory metadata
621 * check mandatory metadata
622 * encode metadata
622 * encode metadata
623
623
624 If you are a human writing code that creates markers, you want to use the
624 If you are a human writing code that creates markers, you want to use the
625 `createmarkers` function in this module instead.
625 `createmarkers` function in this module instead.
626
626
627 return True if a new marker has been added, False if the marker
627 return True if a new marker has been added, False if the marker
628 already existed (no op).
628 already existed (no op).
629 """
629 """
630 flag = int(flag)
630 flag = int(flag)
631 if metadata is None:
631 if metadata is None:
632 metadata = {}
632 metadata = {}
633 if date is None:
633 if date is None:
634 if b'date' in metadata:
634 if b'date' in metadata:
635 # as a courtesy for out-of-tree extensions
635 # as a courtesy for out-of-tree extensions
636 date = dateutil.parsedate(metadata.pop(b'date'))
636 date = dateutil.parsedate(metadata.pop(b'date'))
637 elif ui is not None:
637 elif ui is not None:
638 date = ui.configdate(b'devel', b'default-date')
638 date = ui.configdate(b'devel', b'default-date')
639 if date is None:
639 if date is None:
640 date = dateutil.makedate()
640 date = dateutil.makedate()
641 else:
641 else:
642 date = dateutil.makedate()
642 date = dateutil.makedate()
643 if flag & usingsha256:
643 if flag & usingsha256:
644 if len(prec) != 32:
644 if len(prec) != 32:
645 raise ValueError(prec)
645 raise ValueError(prec)
646 for succ in succs:
646 for succ in succs:
647 if len(succ) != 32:
647 if len(succ) != 32:
648 raise ValueError(succ)
648 raise ValueError(succ)
649 else:
649 else:
650 if len(prec) != 20:
650 if len(prec) != 20:
651 raise ValueError(prec)
651 raise ValueError(prec)
652 for succ in succs:
652 for succ in succs:
653 if len(succ) != 20:
653 if len(succ) != 20:
654 raise ValueError(succ)
654 raise ValueError(succ)
655 if prec in succs:
655 if prec in succs:
656 raise ValueError('in-marker cycle with %s' % prec.hex())
656 raise ValueError('in-marker cycle with %s' % prec.hex())
657
657
658 metadata = tuple(sorted(metadata.items()))
658 metadata = tuple(sorted(metadata.items()))
659 for k, v in metadata:
659 for k, v in metadata:
660 try:
660 try:
661 # might be better to reject non-ASCII keys
661 # might be better to reject non-ASCII keys
662 k.decode('utf-8')
662 k.decode('utf-8')
663 v.decode('utf-8')
663 v.decode('utf-8')
664 except UnicodeDecodeError:
664 except UnicodeDecodeError:
665 raise error.ProgrammingError(
665 raise error.ProgrammingError(
666 b'obsstore metadata must be valid UTF-8 sequence '
666 b'obsstore metadata must be valid UTF-8 sequence '
667 b'(key = %r, value = %r)'
667 b'(key = %r, value = %r)'
668 % (pycompat.bytestr(k), pycompat.bytestr(v))
668 % (pycompat.bytestr(k), pycompat.bytestr(v))
669 )
669 )
670
670
671 marker = (bytes(prec), tuple(succs), flag, metadata, date, parents)
671 marker = (bytes(prec), tuple(succs), flag, metadata, date, parents)
672 return bool(self.add(transaction, [marker]))
672 return bool(self.add(transaction, [marker]))
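# A hedged usage sketch (assumed caller code, not taken from Mercurial):
# recording "old_node was rewritten into new_node" with the low-level API
# above, under a lock and transaction as `pushmarker` below does. Human
# written code should normally prefer the `createmarkers` helper later in
# this module.
#
#     with repo.lock(), repo.transaction(b'example-obsmarker') as tr:
#         repo.obsstore.create(
#             tr,
#             old_node,                   # hypothetical 20-byte node
#             succs=(new_node,),          # empty succs would mean a prune
#             metadata={b'user': b'alice'},
#             ui=repo.ui,
#         )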
673
673
674 def add(self, transaction, markers):
674 def add(self, transaction, markers):
675 """Add new markers to the store
675 """Add new markers to the store
676
676
677 Take care of filtering duplicates.
677 Take care of filtering duplicates.
678 Return the number of new markers."""
678 Return the number of new markers."""
679 if self._readonly:
679 if self._readonly:
680 raise error.Abort(
680 raise error.Abort(
681 _(b'creating obsolete markers is not enabled on this repo')
681 _(b'creating obsolete markers is not enabled on this repo')
682 )
682 )
683 known = set()
683 known = set()
684 getsuccessors = self.successors.get
684 getsuccessors = self.successors.get
685 new = []
685 new = []
686 for m in markers:
686 for m in markers:
687 if m not in getsuccessors(m[0], ()) and m not in known:
687 if m not in getsuccessors(m[0], ()) and m not in known:
688 known.add(m)
688 known.add(m)
689 new.append(m)
689 new.append(m)
690 if new:
690 if new:
691 f = self.svfs(b'obsstore', b'ab')
691 f = self.svfs(b'obsstore', b'ab')
692 try:
692 try:
693 offset = f.tell()
693 offset = f.tell()
694 transaction.add(b'obsstore', offset)
694 transaction.add(b'obsstore', offset)
695 # offset == 0: new file - add the version header
695 # offset == 0: new file - add the version header
696 data = b''.join(encodemarkers(new, offset == 0, self._version))
696 data = b''.join(encodemarkers(new, offset == 0, self._version))
697 f.write(data)
697 f.write(data)
698 finally:
698 finally:
699 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
699 # XXX: f.close() == filecache invalidation == obsstore rebuilt.
700 # call 'filecacheentry.refresh()' here
700 # call 'filecacheentry.refresh()' here
701 f.close()
701 f.close()
702 addedmarkers = transaction.changes.get(b'obsmarkers')
702 addedmarkers = transaction.changes.get(b'obsmarkers')
703 if addedmarkers is not None:
703 if addedmarkers is not None:
704 addedmarkers.update(new)
704 addedmarkers.update(new)
705 self._addmarkers(new, data)
705 self._addmarkers(new, data)
706 # new markers *may* have changed several sets. invalidate the caches.
706 # new markers *may* have changed several sets. invalidate the caches.
707 self.caches.clear()
707 self.caches.clear()
708 # records the number of new markers for the transaction hooks
708 # records the number of new markers for the transaction hooks
709 previous = int(transaction.hookargs.get(b'new_obsmarkers', b'0'))
709 previous = int(transaction.hookargs.get(b'new_obsmarkers', b'0'))
710 transaction.hookargs[b'new_obsmarkers'] = b'%d' % (previous + len(new))
710 transaction.hookargs[b'new_obsmarkers'] = b'%d' % (previous + len(new))
711 return len(new)
711 return len(new)
712
712
713 def mergemarkers(self, transaction, data):
713 def mergemarkers(self, transaction, data):
714 """merge a binary stream of markers inside the obsstore
714 """merge a binary stream of markers inside the obsstore
715
715
716 Returns the number of new markers added."""
716 Returns the number of new markers added."""
717 version, markers = _readmarkers(data)
717 version, markers = _readmarkers(data)
718 return self.add(transaction, markers)
718 return self.add(transaction, markers)
719
719
720 @propertycache
720 @propertycache
721 def _data(self):
721 def _data(self):
722 return self.svfs.tryread(b'obsstore')
722 return self.svfs.tryread(b'obsstore')
723
723
724 @propertycache
724 @propertycache
725 def _version(self):
725 def _version(self):
726 if len(self._data) >= 1:
726 if len(self._data) >= 1:
727 return _readmarkerversion(self._data)
727 return _readmarkerversion(self._data)
728 else:
728 else:
729 return self._defaultformat
729 return self._defaultformat
730
730
731 @propertycache
731 @propertycache
732 def _all(self):
732 def _all(self):
733 data = self._data
733 data = self._data
734 if not data:
734 if not data:
735 return []
735 return []
736 self._version, markers = _readmarkers(data)
736 self._version, markers = _readmarkers(data)
737 markers = list(markers)
737 markers = list(markers)
738 _checkinvalidmarkers(self.repo, markers)
738 _checkinvalidmarkers(self.repo, markers)
739 return markers
739 return markers
740
740
741 @propertycache
741 @propertycache
742 def successors(self):
742 def successors(self):
743 successors = {}
743 successors = {}
744 _addsuccessors(successors, self._all)
744 _addsuccessors(successors, self._all)
745 return successors
745 return successors
746
746
747 @propertycache
747 @propertycache
748 def predecessors(self):
748 def predecessors(self):
749 predecessors = {}
749 predecessors = {}
750 _addpredecessors(predecessors, self._all)
750 _addpredecessors(predecessors, self._all)
751 return predecessors
751 return predecessors
752
752
753 @propertycache
753 @propertycache
754 def children(self):
754 def children(self):
755 children = {}
755 children = {}
756 _addchildren(children, self._all)
756 _addchildren(children, self._all)
757 return children
757 return children
758
758
759 def _cached(self, attr):
759 def _cached(self, attr):
760 return attr in self.__dict__
760 return attr in self.__dict__
761
761
762 def _addmarkers(self, markers, rawdata):
762 def _addmarkers(self, markers, rawdata):
763 markers = list(markers) # to allow repeated iteration
763 markers = list(markers) # to allow repeated iteration
764 self._data = self._data + rawdata
764 self._data = self._data + rawdata
765 self._all.extend(markers)
765 self._all.extend(markers)
766 if self._cached('successors'):
766 if self._cached('successors'):
767 _addsuccessors(self.successors, markers)
767 _addsuccessors(self.successors, markers)
768 if self._cached('predecessors'):
768 if self._cached('predecessors'):
769 _addpredecessors(self.predecessors, markers)
769 _addpredecessors(self.predecessors, markers)
770 if self._cached('children'):
770 if self._cached('children'):
771 _addchildren(self.children, markers)
771 _addchildren(self.children, markers)
772 _checkinvalidmarkers(self.repo, markers)
772 _checkinvalidmarkers(self.repo, markers)
773
773
774 def relevantmarkers(self, nodes):
774 def relevantmarkers(self, nodes=None, revs=None):
775 """return a set of all obsolescence markers relevant to a set of nodes.
775 """return a set of all obsolescence markers relevant to a set of
776 nodes or revisions.
776
777
777 "relevant" to a set of nodes mean:
778 "relevant" to a set of nodes or revisions mean:
778
779
779 - markers that use this changeset as successor
780 - markers that use this changeset as successor
780 - prune markers of direct children of this changeset
781 - prune markers of direct children of this changeset
781 - recursive application of the two rules on predecessors of these
782 - recursive application of the two rules on predecessors of these
782 markers
783 markers
783
784
784 It is a set so you cannot rely on order."""
785 It is a set so you cannot rely on order."""
786 if nodes is None:
787 nodes = set()
788 if revs is None:
789 revs = set()
785
790
786 pendingnodes = set(nodes)
791 get_rev = self.repo.unfiltered().changelog.index.get_rev
792 pendingnodes = set()
793 for marker in self._all:
794 for node in (marker[0],) + marker[1] + (marker[5] or ()):
795 if node in nodes:
796 pendingnodes.add(node)
797 elif revs:
798 rev = get_rev(node)
799 if rev is not None and rev in revs:
800 pendingnodes.add(node)
787 seenmarkers = set()
801 seenmarkers = set()
788 seennodes = set(pendingnodes)
803 seennodes = set()
789 precursorsmarkers = self.predecessors
804 precursorsmarkers = self.predecessors
790 succsmarkers = self.successors
805 succsmarkers = self.successors
791 children = self.children
806 children = self.children
792 while pendingnodes:
807 while pendingnodes:
793 direct = set()
808 direct = set()
794 for current in pendingnodes:
809 for current in pendingnodes:
795 direct.update(precursorsmarkers.get(current, ()))
810 direct.update(precursorsmarkers.get(current, ()))
796 pruned = [m for m in children.get(current, ()) if not m[1]]
811 pruned = [m for m in children.get(current, ()) if not m[1]]
797 direct.update(pruned)
812 direct.update(pruned)
798 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
813 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
799 direct.update(pruned)
814 direct.update(pruned)
800 direct -= seenmarkers
815 direct -= seenmarkers
801 pendingnodes = {m[0] for m in direct}
816 pendingnodes = {m[0] for m in direct}
802 seenmarkers |= direct
817 seenmarkers |= direct
803 pendingnodes -= seennodes
818 pendingnodes -= seennodes
804 seennodes |= pendingnodes
819 seennodes |= pendingnodes
805 return seenmarkers
820 return seenmarkers
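# Self-contained toy model (not Mercurial code) of the traversal implemented
# above: index plain marker tuples the same way the `predecessors`,
# `successors` and `children` mappings do, then walk from the requested
# nodes through predecessor markers and prune markers. The real method can
# additionally prefilter by revision numbers via `get_rev`; this sketch only
# covers the node case.
def _toy_relevantmarkers(markers, nodes):
    preds, succs, children = {}, {}, {}
    for m in markers:
        for s in m[1]:
            preds.setdefault(s, set()).add(m)
        succs.setdefault(m[0], set()).add(m)
        for p in m[5] or ():
            children.setdefault(p, set()).add(m)
    pending, seenmarkers, seennodes = set(nodes), set(), set()
    while pending:
        direct = set()
        for cur in pending:
            direct |= preds.get(cur, set())
            # prune markers (empty successors) are relevant too
            direct |= {m for m in children.get(cur, set()) if not m[1]}
            direct |= {m for m in succs.get(cur, set()) if not m[1]}
        direct -= seenmarkers
        seenmarkers |= direct
        pending = {m[0] for m in direct} - seennodes
        seennodes |= pending
    return seenmarkers

# two chained rewrites a -> b -> c: asking about c pulls in both markers
_a, _b, _c = b'a' * 20, b'b' * 20, b'c' * 20
_ms = [(_a, (_b,), 0, (), (0.0, 0), None), (_b, (_c,), 0, (), (0.0, 0), None)]
assert _toy_relevantmarkers(_ms, {_c}) == set(_ms)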
806
821
807
822
808 def makestore(ui, repo):
823 def makestore(ui, repo):
809 """Create an obsstore instance from a repo."""
824 """Create an obsstore instance from a repo."""
810 # read default format for new obsstore.
825 # read default format for new obsstore.
811 # developer config: format.obsstore-version
826 # developer config: format.obsstore-version
812 defaultformat = ui.configint(b'format', b'obsstore-version')
827 defaultformat = ui.configint(b'format', b'obsstore-version')
813 # rely on obsstore class default when possible.
828 # rely on obsstore class default when possible.
814 kwargs = {}
829 kwargs = {}
815 if defaultformat is not None:
830 if defaultformat is not None:
816 kwargs['defaultformat'] = defaultformat
831 kwargs['defaultformat'] = defaultformat
817 readonly = not isenabled(repo, createmarkersopt)
832 readonly = not isenabled(repo, createmarkersopt)
818 store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
833 store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
819 if store and readonly:
834 if store and readonly:
820 ui.warn(
835 ui.warn(
821 _(b'"obsolete" feature not enabled but %i markers found!\n')
836 _(b'"obsolete" feature not enabled but %i markers found!\n')
822 % len(list(store))
837 % len(list(store))
823 )
838 )
824 return store
839 return store
825
840
826
841
827 def commonversion(versions):
842 def commonversion(versions):
828 """Return the newest version listed in both versions and our local formats.
843 """Return the newest version listed in both versions and our local formats.
829
844
830 Returns None if no common version exists.
845 Returns None if no common version exists.
831 """
846 """
832 versions.sort(reverse=True)
847 versions.sort(reverse=True)
833 # search for highest version known on both sides
848 # search for highest version known on both sides
834 for v in versions:
849 for v in versions:
835 if v in formats:
850 if v in formats:
836 return v
851 return v
837 return None
852 return None
838
853
839
854
840 # arbitrarily picked to fit into the 8K limit from HTTP servers
855 # arbitrarily picked to fit into the 8K limit from HTTP servers
841 # you have to take into account:
856 # you have to take into account:
842 # - the version header
857 # - the version header
843 # - the base85 encoding
858 # - the base85 encoding
844 _maxpayload = 5300
859 _maxpayload = 5300
845
860
846
861
847 def _pushkeyescape(markers):
862 def _pushkeyescape(markers):
848 """encode markers into a dict suitable for pushkey exchange
863 """encode markers into a dict suitable for pushkey exchange
849
864
850 - binary data is base85 encoded
865 - binary data is base85 encoded
851 - split into chunks smaller than 5300 bytes"""
866 - split into chunks smaller than 5300 bytes"""
852 keys = {}
867 keys = {}
853 parts = []
868 parts = []
854 currentlen = _maxpayload * 2 # ensure we create a new part
869 currentlen = _maxpayload * 2 # ensure we create a new part
855 for marker in markers:
870 for marker in markers:
856 nextdata = _fm0encodeonemarker(marker)
871 nextdata = _fm0encodeonemarker(marker)
857 if len(nextdata) + currentlen > _maxpayload:
872 if len(nextdata) + currentlen > _maxpayload:
858 currentpart = []
873 currentpart = []
859 currentlen = 0
874 currentlen = 0
860 parts.append(currentpart)
875 parts.append(currentpart)
861 currentpart.append(nextdata)
876 currentpart.append(nextdata)
862 currentlen += len(nextdata)
877 currentlen += len(nextdata)
863 for idx, part in enumerate(reversed(parts)):
878 for idx, part in enumerate(reversed(parts)):
864 data = b''.join([_pack(b'>B', _fm0version)] + part)
879 data = b''.join([_pack(b'>B', _fm0version)] + part)
865 keys[b'dump%i' % idx] = util.b85encode(data)
880 keys[b'dump%i' % idx] = util.b85encode(data)
866 return keys
881 return keys
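# Toy demonstration (not Mercurial code) of the chunking strategy used by
# `_pushkeyescape` above: greedily pack blobs into parts, opening a fresh
# part whenever the next blob would push the current one past the limit.
def _toy_chunk(blobs, limit):
    parts, current, size = [], None, limit * 2  # oversize: force a first part
    for blob in blobs:
        if len(blob) + size > limit:
            current, size = [], 0
            parts.append(current)
        current.append(blob)
        size += len(blob)
    return parts

_chunks = _toy_chunk([b'x' * 40] * 10, limit=100)
assert len(_chunks) == 5  # two 40-byte blobs fit per 100-byte part
assert all(sum(map(len, p)) <= 100 for p in _chunks)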
867
882
868
883
869 def listmarkers(repo):
884 def listmarkers(repo):
870 """List markers over pushkey"""
885 """List markers over pushkey"""
871 if not repo.obsstore:
886 if not repo.obsstore:
872 return {}
887 return {}
873 return _pushkeyescape(sorted(repo.obsstore))
888 return _pushkeyescape(sorted(repo.obsstore))
874
889
875
890
876 def pushmarker(repo, key, old, new):
891 def pushmarker(repo, key, old, new):
877 """Push markers over pushkey"""
892 """Push markers over pushkey"""
878 if not key.startswith(b'dump'):
893 if not key.startswith(b'dump'):
879 repo.ui.warn(_(b'unknown key: %r') % key)
894 repo.ui.warn(_(b'unknown key: %r') % key)
880 return False
895 return False
881 if old:
896 if old:
882 repo.ui.warn(_(b'unexpected old value for %r') % key)
897 repo.ui.warn(_(b'unexpected old value for %r') % key)
883 return False
898 return False
884 data = util.b85decode(new)
899 data = util.b85decode(new)
885 with repo.lock(), repo.transaction(b'pushkey: obsolete markers') as tr:
900 with repo.lock(), repo.transaction(b'pushkey: obsolete markers') as tr:
886 repo.obsstore.mergemarkers(tr, data)
901 repo.obsstore.mergemarkers(tr, data)
887 repo.invalidatevolatilesets()
902 repo.invalidatevolatilesets()
888 return True
903 return True
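# Hypothetical helper (assumed usage, not part of Mercurial) tying the two
# pushkey endpoints above together: replay every chunk that `listmarkers`
# produces on one repo into another repo through `pushmarker`.
def _replaymarkers(srcrepo, destrepo):
    for key, value in sorted(listmarkers(srcrepo).items()):
        # the "old" value must be empty, otherwise pushmarker refuses the key
        assert pushmarker(destrepo, key, b'', value)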
889
904
890
905
891 # mapping of 'set-name' -> <function to compute this set>
906 # mapping of 'set-name' -> <function to compute this set>
892 cachefuncs = {}
907 cachefuncs = {}
893
908
894
909
895 def cachefor(name):
910 def cachefor(name):
896 """Decorator to register a function as computing the cache for a set"""
911 """Decorator to register a function as computing the cache for a set"""
897
912
898 def decorator(func):
913 def decorator(func):
899 if name in cachefuncs:
914 if name in cachefuncs:
900 msg = b"duplicated registration for volatileset '%s' (existing: %r)"
915 msg = b"duplicated registration for volatileset '%s' (existing: %r)"
901 raise error.ProgrammingError(msg % (name, cachefuncs[name]))
916 raise error.ProgrammingError(msg % (name, cachefuncs[name]))
902 cachefuncs[name] = func
917 cachefuncs[name] = func
903 return func
918 return func
904
919
905 return decorator
920 return decorator
906
921
907
922
908 def getrevs(repo, name):
923 def getrevs(repo, name):
909 """Return the set of revision that belong to the <name> set
924 """Return the set of revision that belong to the <name> set
910
925
911 Such access may compute the set and cache it for future use"""
926 Such access may compute the set and cache it for future use"""
912 repo = repo.unfiltered()
927 repo = repo.unfiltered()
913 with util.timedcm('getrevs %s', name):
928 with util.timedcm('getrevs %s', name):
914 if not repo.obsstore:
929 if not repo.obsstore:
915 return frozenset()
930 return frozenset()
916 if name not in repo.obsstore.caches:
931 if name not in repo.obsstore.caches:
917 repo.obsstore.caches[name] = cachefuncs[name](repo)
932 repo.obsstore.caches[name] = cachefuncs[name](repo)
918 return repo.obsstore.caches[name]
933 return repo.obsstore.caches[name]
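# Hedged sketch of how an extension might register an extra volatileset with
# the `cachefor` decorator above; the set name and the computation are both
# hypothetical, not something Mercurial ships. After registration,
# `getrevs(repo, b'example-empty')` computes the set once and caches it.
@cachefor(b'example-empty')
def _computeexampleemptyset(repo):
    """toy set: obsolete revisions whose changeset touches no files"""
    return frozenset(
        r for r in getrevs(repo, b'obsolete') if not repo[r].files()
    )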
919
934
920
935
921 # To be simple we need to invalidate obsolescence caches when:
936 # To be simple we need to invalidate obsolescence caches when:
922 #
937 #
923 # - a new changeset is added
938 # - a new changeset is added
924 # - the public phase is changed
939 # - the public phase is changed
925 # - obsolescence markers are added
940 # - obsolescence markers are added
926 # - strip is used on a repo
941 # - strip is used on a repo
927 def clearobscaches(repo):
942 def clearobscaches(repo):
928 """Remove all obsolescence related cache from a repo
943 """Remove all obsolescence related cache from a repo
929
944
930 This removes all caches in the obsstore if the obsstore already exists on the
945 This removes all caches in the obsstore if the obsstore already exists on the
931 repo.
946 repo.
932
947
933 (We could be smarter here given the exact events that trigger the cache
948 (We could be smarter here given the exact events that trigger the cache
934 clearing)"""
949 clearing)"""
935 # only clear caches if there is obsstore data in this repo
950 # only clear caches if there is obsstore data in this repo
936 if b'obsstore' in repo._filecache:
951 if b'obsstore' in repo._filecache:
937 repo.obsstore.caches.clear()
952 repo.obsstore.caches.clear()
938
953
939
954
940 def _mutablerevs(repo):
955 def _mutablerevs(repo):
941 """the set of mutable revision in the repository"""
956 """the set of mutable revision in the repository"""
942 return repo._phasecache.getrevset(repo, phases.relevant_mutable_phases)
957 return repo._phasecache.getrevset(repo, phases.relevant_mutable_phases)
943
958
944
959
945 @cachefor(b'obsolete')
960 @cachefor(b'obsolete')
946 def _computeobsoleteset(repo):
961 def _computeobsoleteset(repo):
947 """the set of obsolete revisions"""
962 """the set of obsolete revisions"""
948 getnode = repo.changelog.node
963 getnode = repo.changelog.node
949 notpublic = _mutablerevs(repo)
964 notpublic = _mutablerevs(repo)
950 isobs = repo.obsstore.successors.__contains__
965 isobs = repo.obsstore.successors.__contains__
951 return frozenset(r for r in notpublic if isobs(getnode(r)))
966 return frozenset(r for r in notpublic if isobs(getnode(r)))
952
967
953
968
954 @cachefor(b'orphan')
969 @cachefor(b'orphan')
955 def _computeorphanset(repo):
970 def _computeorphanset(repo):
956 """the set of non obsolete revisions with obsolete parents"""
971 """the set of non obsolete revisions with obsolete parents"""
957 pfunc = repo.changelog.parentrevs
972 pfunc = repo.changelog.parentrevs
958 mutable = _mutablerevs(repo)
973 mutable = _mutablerevs(repo)
959 obsolete = getrevs(repo, b'obsolete')
974 obsolete = getrevs(repo, b'obsolete')
960 others = mutable - obsolete
975 others = mutable - obsolete
961 unstable = set()
976 unstable = set()
962 for r in sorted(others):
977 for r in sorted(others):
963 # A rev is unstable if one of its parents is obsolete or unstable
978 # A rev is unstable if one of its parents is obsolete or unstable
964 # this works since we traverse following growing rev order
979 # this works since we traverse following growing rev order
965 for p in pfunc(r):
980 for p in pfunc(r):
966 if p in obsolete or p in unstable:
981 if p in obsolete or p in unstable:
967 unstable.add(r)
982 unstable.add(r)
968 break
983 break
969 return frozenset(unstable)
984 return frozenset(unstable)
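# Toy model (not Mercurial code) of the single pass above: parents always
# have smaller revision numbers, so one sweep in increasing rev order is
# enough to propagate instability from obsolete revisions to descendants.
# (The real code additionally skips revisions that are themselves obsolete.)
_parentrevs = {0: (), 1: (0,), 2: (1,), 3: (1,)}  # hypothetical tiny DAG
_obsolete = {1}
_unstable = set()
for _r in sorted(_parentrevs):
    if any(_p in _obsolete or _p in _unstable for _p in _parentrevs[_r]):
        _unstable.add(_r)
assert _unstable == {2, 3}  # both descendants of the obsolete rev 1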
970
985
971
986
972 @cachefor(b'suspended')
987 @cachefor(b'suspended')
973 def _computesuspendedset(repo):
988 def _computesuspendedset(repo):
974 """the set of obsolete parents with non obsolete descendants"""
989 """the set of obsolete parents with non obsolete descendants"""
975 suspended = repo.changelog.ancestors(getrevs(repo, b'orphan'))
990 suspended = repo.changelog.ancestors(getrevs(repo, b'orphan'))
976 return frozenset(r for r in getrevs(repo, b'obsolete') if r in suspended)
991 return frozenset(r for r in getrevs(repo, b'obsolete') if r in suspended)
977
992
978
993
979 @cachefor(b'extinct')
994 @cachefor(b'extinct')
980 def _computeextinctset(repo):
995 def _computeextinctset(repo):
981 """the set of obsolete parents without non obsolete descendants"""
996 """the set of obsolete parents without non obsolete descendants"""
982 return getrevs(repo, b'obsolete') - getrevs(repo, b'suspended')
997 return getrevs(repo, b'obsolete') - getrevs(repo, b'suspended')
983
998
984
999
985 @cachefor(b'phasedivergent')
1000 @cachefor(b'phasedivergent')
986 def _computephasedivergentset(repo):
1001 def _computephasedivergentset(repo):
987 """the set of revs trying to obsolete public revisions"""
1002 """the set of revs trying to obsolete public revisions"""
988 bumped = set()
1003 bumped = set()
989 # util function (avoid attribute lookup in the loop)
1004 # util function (avoid attribute lookup in the loop)
990 phase = repo._phasecache.phase # would be faster to grab the full list
1005 phase = repo._phasecache.phase # would be faster to grab the full list
991 public = phases.public
1006 public = phases.public
992 cl = repo.changelog
1007 cl = repo.changelog
993 torev = cl.index.get_rev
1008 torev = cl.index.get_rev
994 tonode = cl.node
1009 tonode = cl.node
995 obsstore = repo.obsstore
1010 obsstore = repo.obsstore
996 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1011 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
997 for rev in candidates:
1012 for rev in candidates:
998 # We only evaluate mutable, non-obsolete revisions
1013 # We only evaluate mutable, non-obsolete revisions
999 node = tonode(rev)
1014 node = tonode(rev)
1000 # (future) A cache of predecessors may be worth it if split is very common
1015 # (future) A cache of predecessors may be worth it if split is very common
1001 for pnode in obsutil.allpredecessors(
1016 for pnode in obsutil.allpredecessors(
1002 obsstore, [node], ignoreflags=bumpedfix
1017 obsstore, [node], ignoreflags=bumpedfix
1003 ):
1018 ):
1004 prev = torev(pnode) # unfiltered! but so is phasecache
1019 prev = torev(pnode) # unfiltered! but so is phasecache
1005 if (prev is not None) and (phase(repo, prev) <= public):
1020 if (prev is not None) and (phase(repo, prev) <= public):
1006 # we have a public predecessor
1021 # we have a public predecessor
1007 bumped.add(rev)
1022 bumped.add(rev)
1008 break # Next draft!
1023 break # Next draft!
1009 return frozenset(bumped)
1024 return frozenset(bumped)
1010
1025
1011
1026
1012 @cachefor(b'contentdivergent')
1027 @cachefor(b'contentdivergent')
1013 def _computecontentdivergentset(repo):
1028 def _computecontentdivergentset(repo):
1014 """the set of rev that compete to be the final successors of some revision."""
1029 """the set of rev that compete to be the final successors of some revision."""
1015 divergent = set()
1030 divergent = set()
1016 obsstore = repo.obsstore
1031 obsstore = repo.obsstore
1017 newermap = {}
1032 newermap = {}
1018 tonode = repo.changelog.node
1033 tonode = repo.changelog.node
1019 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1034 candidates = sorted(_mutablerevs(repo) - getrevs(repo, b"obsolete"))
1020 for rev in candidates:
1035 for rev in candidates:
1021 node = tonode(rev)
1036 node = tonode(rev)
1022 mark = obsstore.predecessors.get(node, ())
1037 mark = obsstore.predecessors.get(node, ())
1023 toprocess = set(mark)
1038 toprocess = set(mark)
1024 seen = set()
1039 seen = set()
1025 while toprocess:
1040 while toprocess:
1026 prec = toprocess.pop()[0]
1041 prec = toprocess.pop()[0]
1027 if prec in seen:
1042 if prec in seen:
1028 continue # emergency cycle hanging prevention
1043 continue # emergency cycle hanging prevention
1029 seen.add(prec)
1044 seen.add(prec)
1030 if prec not in newermap:
1045 if prec not in newermap:
1031 obsutil.successorssets(repo, prec, cache=newermap)
1046 obsutil.successorssets(repo, prec, cache=newermap)
1032 newer = [n for n in newermap[prec] if n]
1047 newer = [n for n in newermap[prec] if n]
1033 if len(newer) > 1:
1048 if len(newer) > 1:
1034 divergent.add(rev)
1049 divergent.add(rev)
1035 break
1050 break
1036 toprocess.update(obsstore.predecessors.get(prec, ()))
1051 toprocess.update(obsstore.predecessors.get(prec, ()))
1037 return frozenset(divergent)
1052 return frozenset(divergent)
1038
1053
1039
1054
1040 def makefoldid(relation, user):
1055 def makefoldid(relation, user):
1041
1056
1042 folddigest = hashutil.sha1(user)
1057 folddigest = hashutil.sha1(user)
1043 for p in relation[0] + relation[1]:
1058 for p in relation[0] + relation[1]:
1044 folddigest.update(b'%d' % p.rev())
1059 folddigest.update(b'%d' % p.rev())
1045 folddigest.update(p.node())
1060 folddigest.update(p.node())
1046 # Since fold only has to compete against fold for the same successors, it
1061 # Since fold only has to compete against fold for the same successors, it
1047 # seems fine to use a small ID. Smaller IDs save space.
1062 # seems fine to use a small ID. Smaller IDs save space.
1048 return hex(folddigest.digest())[:8]
1063 return hex(folddigest.digest())[:8]
1049
1064
1050
1065
1051 def createmarkers(
1066 def createmarkers(
1052 repo, relations, flag=0, date=None, metadata=None, operation=None
1067 repo, relations, flag=0, date=None, metadata=None, operation=None
1053 ):
1068 ):
1054 """Add obsolete markers between changesets in a repo
1069 """Add obsolete markers between changesets in a repo
1055
1070
1056 <relations> must be an iterable of ((<old>,...), (<new>, ...)[,{metadata}])
1071 <relations> must be an iterable of ((<old>,...), (<new>, ...)[,{metadata}])
1057 tuples. `old` and `news` are changectx. metadata is an optional dictionary
1072 tuples. `old` and `news` are changectx. metadata is an optional dictionary
1058 containing metadata for this marker only. It is merged with the global
1073 containing metadata for this marker only. It is merged with the global
1059 metadata specified through the `metadata` argument of this function.
1074 metadata specified through the `metadata` argument of this function.
1060 Any string values in metadata must be UTF-8 bytes.
1075 Any string values in metadata must be UTF-8 bytes.
1061
1076
1062 Trying to obsolete a public changeset will raise an exception.
1077 Trying to obsolete a public changeset will raise an exception.
1063
1078
1064 Current user and date are used except if specified otherwise in the
1079 Current user and date are used except if specified otherwise in the
1065 metadata attribute.
1080 metadata attribute.
1066
1081
1067 This function operates within a transaction of its own, but does
1082 This function operates within a transaction of its own, but does
1068 not take any lock on the repo.
1083 not take any lock on the repo.
1069 """
1084 """
1070 # prepare metadata
1085 # prepare metadata
1071 if metadata is None:
1086 if metadata is None:
1072 metadata = {}
1087 metadata = {}
1073 if b'user' not in metadata:
1088 if b'user' not in metadata:
1074 luser = (
1089 luser = (
1075 repo.ui.config(b'devel', b'user.obsmarker') or repo.ui.username()
1090 repo.ui.config(b'devel', b'user.obsmarker') or repo.ui.username()
1076 )
1091 )
1077 metadata[b'user'] = encoding.fromlocal(luser)
1092 metadata[b'user'] = encoding.fromlocal(luser)
1078
1093
1079 # Operation metadata handling
1094 # Operation metadata handling
1080 useoperation = repo.ui.configbool(
1095 useoperation = repo.ui.configbool(
1081 b'experimental', b'evolution.track-operation'
1096 b'experimental', b'evolution.track-operation'
1082 )
1097 )
1083 if useoperation and operation:
1098 if useoperation and operation:
1084 metadata[b'operation'] = operation
1099 metadata[b'operation'] = operation
1085
1100
1086 # Effect flag metadata handling
1101 # Effect flag metadata handling
1087 saveeffectflag = repo.ui.configbool(
1102 saveeffectflag = repo.ui.configbool(
1088 b'experimental', b'evolution.effect-flags'
1103 b'experimental', b'evolution.effect-flags'
1089 )
1104 )
1090
1105
1091 with repo.transaction(b'add-obsolescence-marker') as tr:
1106 with repo.transaction(b'add-obsolescence-marker') as tr:
1092 markerargs = []
1107 markerargs = []
1093 for rel in relations:
1108 for rel in relations:
1094 predecessors = rel[0]
1109 predecessors = rel[0]
1095 if not isinstance(predecessors, tuple):
1110 if not isinstance(predecessors, tuple):
1096 # preserve compat with old API until all callers are migrated
1111 # preserve compat with old API until all callers are migrated
1097 predecessors = (predecessors,)
1112 predecessors = (predecessors,)
1098 if len(predecessors) > 1 and len(rel[1]) != 1:
1113 if len(predecessors) > 1 and len(rel[1]) != 1:
1099 msg = b'Fold markers can only have 1 successor, not %d'
1114 msg = b'Fold markers can only have 1 successor, not %d'
1100 raise error.ProgrammingError(msg % len(rel[1]))
1115 raise error.ProgrammingError(msg % len(rel[1]))
1101 foldid = None
1116 foldid = None
1102 foldsize = len(predecessors)
1117 foldsize = len(predecessors)
1103 if 1 < foldsize:
1118 if 1 < foldsize:
1104 foldid = makefoldid(rel, metadata[b'user'])
1119 foldid = makefoldid(rel, metadata[b'user'])
1105 for foldidx, prec in enumerate(predecessors, 1):
1120 for foldidx, prec in enumerate(predecessors, 1):
1106 sucs = rel[1]
1121 sucs = rel[1]
1107 localmetadata = metadata.copy()
1122 localmetadata = metadata.copy()
1108 if len(rel) > 2:
1123 if len(rel) > 2:
1109 localmetadata.update(rel[2])
1124 localmetadata.update(rel[2])
1110 if foldid is not None:
1125 if foldid is not None:
1111 localmetadata[b'fold-id'] = foldid
1126 localmetadata[b'fold-id'] = foldid
1112 localmetadata[b'fold-idx'] = b'%d' % foldidx
1127 localmetadata[b'fold-idx'] = b'%d' % foldidx
1113 localmetadata[b'fold-size'] = b'%d' % foldsize
1128 localmetadata[b'fold-size'] = b'%d' % foldsize
1114
1129
1115 if not prec.mutable():
1130 if not prec.mutable():
1116 raise error.Abort(
1131 raise error.Abort(
1117 _(b"cannot obsolete public changeset: %s") % prec,
1132 _(b"cannot obsolete public changeset: %s") % prec,
1118 hint=b"see 'hg help phases' for details",
1133 hint=b"see 'hg help phases' for details",
1119 )
1134 )
1120 nprec = prec.node()
1135 nprec = prec.node()
1121 nsucs = tuple(s.node() for s in sucs)
1136 nsucs = tuple(s.node() for s in sucs)
1122 npare = None
1137 npare = None
1123 if not nsucs:
1138 if not nsucs:
1124 npare = tuple(p.node() for p in prec.parents())
1139 npare = tuple(p.node() for p in prec.parents())
1125 if nprec in nsucs:
1140 if nprec in nsucs:
1126 raise error.Abort(
1141 raise error.Abort(
1127 _(b"changeset %s cannot obsolete itself") % prec
1142 _(b"changeset %s cannot obsolete itself") % prec
1128 )
1143 )
1129
1144
1130 # Effect flag can be different by relation
1145 # Effect flag can be different by relation
1131 if saveeffectflag:
1146 if saveeffectflag:
1132 # The effect flag is saved in a versioned field name for
1147 # The effect flag is saved in a versioned field name for
1133 # future evolution
1148 # future evolution
1134 effectflag = obsutil.geteffectflag(prec, sucs)
1149 effectflag = obsutil.geteffectflag(prec, sucs)
1135 localmetadata[obsutil.EFFECTFLAGFIELD] = b"%d" % effectflag
1150 localmetadata[obsutil.EFFECTFLAGFIELD] = b"%d" % effectflag
1136
1151
1137 # Creating the marker causes the hidden cache to become
1152 # Creating the marker causes the hidden cache to become
1138 # invalid, which causes recomputation when we ask for
1153 # invalid, which causes recomputation when we ask for
1139 # prec.parents() above. Resulting in n^2 behavior. So let's
1154 # prec.parents() above. Resulting in n^2 behavior. So let's
1140 # prepare all of the args first, then create the markers.
1155 # prepare all of the args first, then create the markers.
1141 markerargs.append((nprec, nsucs, npare, localmetadata))
1156 markerargs.append((nprec, nsucs, npare, localmetadata))
1142
1157
1143 for args in markerargs:
1158 for args in markerargs:
1144 nprec, nsucs, npare, localmetadata = args
1159 nprec, nsucs, npare, localmetadata = args
1145 repo.obsstore.create(
1160 repo.obsstore.create(
1146 tr,
1161 tr,
1147 nprec,
1162 nprec,
1148 nsucs,
1163 nsucs,
1149 flag,
1164 flag,
1150 parents=npare,
1165 parents=npare,
1151 date=date,
1166 date=date,
1152 metadata=localmetadata,
1167 metadata=localmetadata,
1153 ui=repo.ui,
1168 ui=repo.ui,
1154 )
1169 )
1155 repo.filteredrevcache.clear()
1170 repo.filteredrevcache.clear()
@@ -1,1047 +1,1047
1 # obsutil.py - utility functions for obsolescence
1 # obsutil.py - utility functions for obsolescence
2 #
2 #
3 # Copyright 2017 Boris Feld <boris.feld@octobus.net>
3 # Copyright 2017 Boris Feld <boris.feld@octobus.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8
8
9 import re
9 import re
10
10
11 from .i18n import _
11 from .i18n import _
12 from .node import (
12 from .node import (
13 hex,
13 hex,
14 short,
14 short,
15 )
15 )
16 from . import (
16 from . import (
17 diffutil,
17 diffutil,
18 encoding,
18 encoding,
19 error,
19 error,
20 phases,
20 phases,
21 util,
21 util,
22 )
22 )
23 from .utils import dateutil
23 from .utils import dateutil
24
24
25 ### obsolescence marker flag
25 ### obsolescence marker flag
26
26
27 ## bumpedfix flag
27 ## bumpedfix flag
28 #
28 #
29 # When a changeset A' succeeds a changeset A which became public, we call A'
29 # When a changeset A' succeeds a changeset A which became public, we call A'
30 # "bumped" because it's a successor of a public changeset
30 # "bumped" because it's a successor of a public changeset
31 #
31 #
32 # o A' (bumped)
32 # o A' (bumped)
33 # |`:
33 # |`:
34 # | o A
34 # | o A
35 # |/
35 # |/
36 # o Z
36 # o Z
37 #
37 #
38 # The way to solve this situation is to create a new changeset Ad as a child
38 # The way to solve this situation is to create a new changeset Ad as a child
39 # of A. This changeset has the same content as A'. So the diff from A to A'
39 # of A. This changeset has the same content as A'. So the diff from A to A'
40 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
40 # is the same as the diff from A to Ad. Ad is marked as a successor of A'
41 #
41 #
42 # o Ad
42 # o Ad
43 # |`:
43 # |`:
44 # | x A'
44 # | x A'
45 # |'|
45 # |'|
46 # o | A
46 # o | A
47 # |/
47 # |/
48 # o Z
48 # o Z
49 #
49 #
50 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
50 # But by transitivity Ad is also a successor of A. To avoid having Ad marked
51 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
51 # as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
52 # This flag means that the successors express the changes between the public and
52 # This flag means that the successors express the changes between the public and
53 # bumped version and fix the situation, breaking the transitivity of
53 # bumped version and fix the situation, breaking the transitivity of
54 # "bumped" here.
54 # "bumped" here.
55 bumpedfix = 1
55 bumpedfix = 1
56 usingsha256 = 2
56 usingsha256 = 2
57
57
58
58
59 class marker:
59 class marker:
60 """Wrap obsolete marker raw data"""
60 """Wrap obsolete marker raw data"""
61
61
62 def __init__(self, repo, data):
62 def __init__(self, repo, data):
63 # the repo argument will be used to create changectx in a later version
63 # the repo argument will be used to create changectx in a later version
64 self._repo = repo
64 self._repo = repo
65 self._data = data
65 self._data = data
66 self._decodedmeta = None
66 self._decodedmeta = None
67
67
68 def __hash__(self):
68 def __hash__(self):
69 return hash(self._data)
69 return hash(self._data)
70
70
71 def __eq__(self, other):
71 def __eq__(self, other):
72 if type(other) != type(self):
72 if type(other) != type(self):
73 return False
73 return False
74 return self._data == other._data
74 return self._data == other._data
75
75
76 def prednode(self):
76 def prednode(self):
77 """Predecessor changeset node identifier"""
77 """Predecessor changeset node identifier"""
78 return self._data[0]
78 return self._data[0]
79
79
80 def succnodes(self):
80 def succnodes(self):
81 """List of successor changesets node identifiers"""
81 """List of successor changesets node identifiers"""
82 return self._data[1]
82 return self._data[1]
83
83
84 def parentnodes(self):
84 def parentnodes(self):
85 """Parents of the predecessors (None if not recorded)"""
85 """Parents of the predecessors (None if not recorded)"""
86 return self._data[5]
86 return self._data[5]
87
87
88 def metadata(self):
88 def metadata(self):
89 """Decoded metadata dictionary"""
89 """Decoded metadata dictionary"""
90 return dict(self._data[3])
90 return dict(self._data[3])
91
91
92 def date(self):
92 def date(self):
93 """Creation date as (unixtime, offset)"""
93 """Creation date as (unixtime, offset)"""
94 return self._data[4]
94 return self._data[4]
95
95
96 def flags(self):
96 def flags(self):
97 """The flags field of the marker"""
97 """The flags field of the marker"""
98 return self._data[2]
98 return self._data[2]
99
99
100
100
101 def getmarkers(repo, nodes=None, exclusive=False):
101 def getmarkers(repo, nodes=None, exclusive=False):
102 """returns markers known in a repository
102 """returns markers known in a repository
103
103
104 If <nodes> is specified, only markers "relevant" to those nodes are
104 If <nodes> is specified, only markers "relevant" to those nodes are
105 returned"""
105 returned"""
106 if nodes is None:
106 if nodes is None:
107 rawmarkers = repo.obsstore
107 rawmarkers = repo.obsstore
108 elif exclusive:
108 elif exclusive:
109 rawmarkers = exclusivemarkers(repo, nodes)
109 rawmarkers = exclusivemarkers(repo, nodes)
110 else:
110 else:
111 rawmarkers = repo.obsstore.relevantmarkers(nodes)
111 rawmarkers = repo.obsstore.relevantmarkers(nodes=nodes)
112
112
113 for markerdata in rawmarkers:
113 for markerdata in rawmarkers:
114 yield marker(repo, markerdata)
114 yield marker(repo, markerdata)
115
115
116
116
117 def sortedmarkers(markers):
117 def sortedmarkers(markers):
118 # last item of marker tuple ('parents') may be None or a tuple
118 # last item of marker tuple ('parents') may be None or a tuple
119 return sorted(markers, key=lambda m: m[:-1] + (m[-1] or (),))
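# Quick illustration (not from Mercurial) of why the key above maps a None
# parents field to (): without it, sorting markers whose last items mix
# None and tuples would raise a TypeError on Python 3.
_m1 = (b'a' * 20, (), 0, (), (0.0, 0), None)
_m2 = (b'a' * 20, (), 0, (), (0.0, 0), (b'b' * 20,))
assert sortedmarkers([_m2, _m1])[0] is _m1  # () sorts before a 1-tuple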
119 return sorted(markers, key=lambda m: m[:-1] + (m[-1] or (),))
120
120
121
121
122 def closestpredecessors(repo, nodeid):
122 def closestpredecessors(repo, nodeid):
123 """yield the list of next predecessors pointing on visible changectx nodes
123 """yield the list of next predecessors pointing on visible changectx nodes
124
124
125 This function respects the repoview filtering; filtered revisions will be
125 This function respects the repoview filtering; filtered revisions will be
126 considered missing.
126 considered missing.
127 """
127 """
128
128
129 precursors = repo.obsstore.predecessors
129 precursors = repo.obsstore.predecessors
130 stack = [nodeid]
130 stack = [nodeid]
131 seen = set(stack)
131 seen = set(stack)
132
132
133 while stack:
133 while stack:
134 current = stack.pop()
134 current = stack.pop()
135 currentpreccs = precursors.get(current, ())
135 currentpreccs = precursors.get(current, ())
136
136
137 for prec in currentpreccs:
137 for prec in currentpreccs:
138 precnodeid = prec[0]
138 precnodeid = prec[0]
139
139
140 # Basic cycle protection
140 # Basic cycle protection
141 if precnodeid in seen:
141 if precnodeid in seen:
142 continue
142 continue
143 seen.add(precnodeid)
143 seen.add(precnodeid)
144
144
145 if precnodeid in repo:
145 if precnodeid in repo:
146 yield precnodeid
146 yield precnodeid
147 else:
147 else:
148 stack.append(precnodeid)
148 stack.append(precnodeid)
149
149
150
150
151 def allpredecessors(obsstore, nodes, ignoreflags=0):
151 def allpredecessors(obsstore, nodes, ignoreflags=0):
152 """Yield node for every precursors of <nodes>.
152 """Yield node for every precursors of <nodes>.
153
153
154 Some precursors may be unknown locally.
154 Some precursors may be unknown locally.
155
155
156 This is a linear yield unsuited to detecting folded changesets. It includes
156 This is a linear yield unsuited to detecting folded changesets. It includes
157 initial nodes too."""
157 initial nodes too."""
158
158
159 remaining = set(nodes)
159 remaining = set(nodes)
160 seen = set(remaining)
160 seen = set(remaining)
161 prec = obsstore.predecessors.get
161 prec = obsstore.predecessors.get
162 while remaining:
162 while remaining:
163 current = remaining.pop()
163 current = remaining.pop()
164 yield current
164 yield current
165 for mark in prec(current, ()):
165 for mark in prec(current, ()):
166 # ignore marker flagged with specified flag
166 # ignore marker flagged with specified flag
167 if mark[2] & ignoreflags:
167 if mark[2] & ignoreflags:
168 continue
168 continue
169 suc = mark[0]
169 suc = mark[0]
170 if suc not in seen:
170 if suc not in seen:
171 seen.add(suc)
171 seen.add(suc)
172 remaining.add(suc)
172 remaining.add(suc)
173
173
174
174
175 def allsuccessors(obsstore, nodes, ignoreflags=0):
175 def allsuccessors(obsstore, nodes, ignoreflags=0):
176 """Yield node for every successor of <nodes>.
176 """Yield node for every successor of <nodes>.
177
177
178 Some successors may be unknown locally.
178 Some successors may be unknown locally.
179
179
180 This is a linear yield unsuited to detecting split changesets. It includes
180 This is a linear yield unsuited to detecting split changesets. It includes
181 initial nodes too."""
181 initial nodes too."""
182 remaining = set(nodes)
182 remaining = set(nodes)
183 seen = set(remaining)
183 seen = set(remaining)
184 while remaining:
184 while remaining:
185 current = remaining.pop()
185 current = remaining.pop()
186 yield current
186 yield current
187 for mark in obsstore.successors.get(current, ()):
187 for mark in obsstore.successors.get(current, ()):
188 # ignore marker flagged with specified flag
188 # ignore marker flagged with specified flag
189 if mark[2] & ignoreflags:
189 if mark[2] & ignoreflags:
190 continue
190 continue
191 for suc in mark[1]:
191 for suc in mark[1]:
192 if suc not in seen:
192 if suc not in seen:
193 seen.add(suc)
193 seen.add(suc)
194 remaining.add(suc)
194 remaining.add(suc)
195
195
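# Toy walk (illustrative, not Mercurial code) exercising `allsuccessors`
# above with a minimal stand-in object exposing only a `successors` mapping;
# the nodes are fake placeholders and the single marker records a split of
# a into b and c.
_a, _b, _c = b'a' * 20, b'b' * 20, b'c' * 20

class _ToyStore:
    successors = {_a: [(_a, (_b, _c), 0, (), (0.0, 0), None)]}

assert set(allsuccessors(_ToyStore(), [_a])) == {_a, _b, _c}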
196
196
197 def _filterprunes(markers):
197 def _filterprunes(markers):
198 """return a set with no prune markers"""
198 """return a set with no prune markers"""
199 return {m for m in markers if m[1]}
199 return {m for m in markers if m[1]}
200
200
201
201
202 def exclusivemarkers(repo, nodes):
202 def exclusivemarkers(repo, nodes):
203 """set of markers relevant to "nodes" but no other locally-known nodes
203 """set of markers relevant to "nodes" but no other locally-known nodes
204
204
205 This function computes the set of markers "exclusive" to a locally-known
205 This function computes the set of markers "exclusive" to a locally-known
206 node. This means we walk the markers starting from <nodes> until we reach a
206 node. This means we walk the markers starting from <nodes> until we reach a
207 locally-known precursor outside of <nodes>. Elements of <nodes> with
207 locally-known precursor outside of <nodes>. Elements of <nodes> with
208 locally-known successors outside of <nodes> are ignored (since their
208 locally-known successors outside of <nodes> are ignored (since their
209 precursors markers are also relevant to these successors).
209 precursors markers are also relevant to these successors).
210
210
211 For example:
211 For example:
212
212
213 # (A0 rewritten as A1)
213 # (A0 rewritten as A1)
214 #
214 #
215 # A0 <-1- A1 # Marker "1" is exclusive to A1
215 # A0 <-1- A1 # Marker "1" is exclusive to A1
216
216
217 or
217 or
218
218
219 # (A0 rewritten as AX; AX rewritten as A1; AX is unknown locally)
219 # (A0 rewritten as AX; AX rewritten as A1; AX is unknown locally)
220 #
220 #
221 # <-1- A0 <-2- AX <-3- A1 # Marker "2,3" are exclusive to A1
221 # <-1- A0 <-2- AX <-3- A1 # Marker "2,3" are exclusive to A1
222
222
223 or
223 or
224
224
225 # (A0 has unknown precursors, A0 rewritten as A1 and A2 (divergence))
225 # (A0 has unknown precursors, A0 rewritten as A1 and A2 (divergence))
226 #
226 #
227 # <-2- A1 # Marker "2" is exclusive to A0,A1
227 # <-2- A1 # Marker "2" is exclusive to A0,A1
228 # /
228 # /
229 # <-1- A0
229 # <-1- A0
230 # \
230 # \
231 # <-3- A2 # Marker "3" is exclusive to A0,A2
231 # <-3- A2 # Marker "3" is exclusive to A0,A2
232 #
232 #
233 # in addition:
233 # in addition:
234 #
234 #
235 # Markers "2,3" are exclusive to A1,A2
235 # Markers "2,3" are exclusive to A1,A2
236 # Markers "1,2,3" are exclusive to A0,A1,A2
236 # Markers "1,2,3" are exclusive to A0,A1,A2
237
237
238 See test/test-obsolete-bundle-strip.t for more examples.
238 See test/test-obsolete-bundle-strip.t for more examples.
239
239
240 An example usage is strip. When stripping a changeset, we also want to
240 An example usage is strip. When stripping a changeset, we also want to
241 strip the markers exclusive to this changeset. Otherwise we would have
241 strip the markers exclusive to this changeset. Otherwise we would have
242 "dangling"" obsolescence markers from its precursors: Obsolescence markers
242 "dangling"" obsolescence markers from its precursors: Obsolescence markers
243 marking a node as obsolete without any successors available locally.
243 marking a node as obsolete without any successors available locally.
244
244
245 As for relevant markers, the prune markers for children will be followed.
245 As for relevant markers, the prune markers for children will be followed.
246 Of course, they will only be followed if the pruned child is
246 Of course, they will only be followed if the pruned child is
247 locally-known, since the prune markers are relevant to the pruned node.
247 locally-known, since the prune markers are relevant to the pruned node.
248 However, while prune markers are considered relevant to the parent of the
248 However, while prune markers are considered relevant to the parent of the
249 pruned changesets, prune markers for locally-known changeset (with no
249 pruned changesets, prune markers for locally-known changeset (with no
250 successors) are considered exclusive to the pruned nodes. This allows us
250 successors) are considered exclusive to the pruned nodes. This allows us
251 to strip the prune markers (with the rest of the exclusive chain) alongside
251 to strip the prune markers (with the rest of the exclusive chain) alongside
252 the pruned changesets.
252 the pruned changesets.
253 """
253 """
254 # running on a filtered repository would be dangerous as markers could be
254 # running on a filtered repository would be dangerous as markers could be
255 # reported as exclusive when they are relevant for other filtered nodes.
255 # reported as exclusive when they are relevant for other filtered nodes.
256 unfi = repo.unfiltered()
256 unfi = repo.unfiltered()
257
257
258 # shortcut to various useful items
258 # shortcut to various useful items
259 has_node = unfi.changelog.index.has_node
259 has_node = unfi.changelog.index.has_node
260 precursorsmarkers = unfi.obsstore.predecessors
260 precursorsmarkers = unfi.obsstore.predecessors
261 successormarkers = unfi.obsstore.successors
261 successormarkers = unfi.obsstore.successors
262 childrenmarkers = unfi.obsstore.children
262 childrenmarkers = unfi.obsstore.children
263
263
264 # exclusive markers (return of the function)
264 # exclusive markers (return of the function)
265 exclmarkers = set()
265 exclmarkers = set()
266 # we need fast membership testing
266 # we need fast membership testing
267 nodes = set(nodes)
267 nodes = set(nodes)
268 # looking for head in the obshistory
268 # looking for head in the obshistory
269 #
269 #
270 # XXX we are ignoring all issues in regard with cycle for now.
270 # XXX we are ignoring all issues in regard with cycle for now.
271 stack = [n for n in nodes if not _filterprunes(successormarkers.get(n, ()))]
271 stack = [n for n in nodes if not _filterprunes(successormarkers.get(n, ()))]
272 stack.sort()
272 stack.sort()
273 # nodes already stacked
273 # nodes already stacked
274 seennodes = set(stack)
274 seennodes = set(stack)
275 while stack:
275 while stack:
276 current = stack.pop()
276 current = stack.pop()
277 # fetch precursors markers
277 # fetch precursors markers
278 markers = list(precursorsmarkers.get(current, ()))
278 markers = list(precursorsmarkers.get(current, ()))
279 # extend the list with prune markers
279 # extend the list with prune markers
280 for mark in successormarkers.get(current, ()):
280 for mark in successormarkers.get(current, ()):
281 if not mark[1]:
281 if not mark[1]:
282 markers.append(mark)
282 markers.append(mark)
283 # and markers from children (looking for prune)
283 # and markers from children (looking for prune)
284 for mark in childrenmarkers.get(current, ()):
284 for mark in childrenmarkers.get(current, ()):
285 if not mark[1]:
285 if not mark[1]:
286 markers.append(mark)
286 markers.append(mark)
287 # traverse the markers
287 # traverse the markers
288 for mark in markers:
288 for mark in markers:
289 if mark in exclmarkers:
289 if mark in exclmarkers:
290 # markers already selected
290 # markers already selected
291 continue
291 continue
292
292
293 # If the markers is about the current node, select it
293 # If the markers is about the current node, select it
294 #
294 #
295 # (this delay the addition of markers from children)
295 # (this delay the addition of markers from children)
296 if mark[1] or mark[0] == current:
296 if mark[1] or mark[0] == current:
297 exclmarkers.add(mark)
297 exclmarkers.add(mark)
298
298
299 # should we keep traversing through the precursors?
299 # should we keep traversing through the precursors?
300 prec = mark[0]
300 prec = mark[0]
301
301
302 # nodes in the stack or already processed
302 # nodes in the stack or already processed
303 if prec in seennodes:
303 if prec in seennodes:
304 continue
304 continue
305
305
306 # is this a locally known node ?
306 # is this a locally known node ?
307 known = has_node(prec)
307 known = has_node(prec)
308 # if locally-known and not in the <nodes> set the traversal
308 # if locally-known and not in the <nodes> set the traversal
309 # stop here.
309 # stop here.
310 if known and prec not in nodes:
310 if known and prec not in nodes:
311 continue
311 continue
312
312
313 # do not keep going if there are unselected markers pointing to this
313 # do not keep going if there are unselected markers pointing to this
314 # node. If we end up traversing these unselected markers later the
314 # node. If we end up traversing these unselected markers later the
315 # node will be taken care of at that point.
315 # node will be taken care of at that point.
316 precmarkers = _filterprunes(successormarkers.get(prec))
316 precmarkers = _filterprunes(successormarkers.get(prec))
317 if precmarkers.issubset(exclmarkers):
317 if precmarkers.issubset(exclmarkers):
318 seennodes.add(prec)
318 seennodes.add(prec)
319 stack.append(prec)
319 stack.append(prec)
320
320
321 return exclmarkers
321 return exclmarkers
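# Hypothetical helper (assumed usage, not part of Mercurial) showing the
# strip scenario described above: markers exclusive to `nodes` may be
# deleted together with them, while the remaining relevant markers must be
# kept because other locally-known changesets still depend on them.
def _partitionmarkersforstrip(repo, nodes):
    exclusive = exclusivemarkers(repo, nodes)
    relevant = repo.obsstore.relevantmarkers(nodes=set(nodes))
    return exclusive, relevant - exclusive  # (strippable, must-keep)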
322
322
323
323
324 def foreground(repo, nodes):
324 def foreground(repo, nodes):
325 """return all nodes in the "foreground" of other node
325 """return all nodes in the "foreground" of other node
326
326
327 The foreground of a revision is anything reachable using parent -> children
327 The foreground of a revision is anything reachable using parent -> children
328 or precursor -> successor relation. It is very similar to "descendant" but
328 or precursor -> successor relation. It is very similar to "descendant" but
329 augmented with obsolescence information.
329 augmented with obsolescence information.
330
330
331 Beware that obsolescence cycles may arise in complex situations.
331 Beware that obsolescence cycles may arise in complex situations.
332 """
332 """
333 repo = repo.unfiltered()
333 repo = repo.unfiltered()
334 foreground = set(repo.set(b'%ln::', nodes))
334 foreground = set(repo.set(b'%ln::', nodes))
335 if repo.obsstore:
335 if repo.obsstore:
336 # We only need this complicated logic if there is obsolescence
336 # We only need this complicated logic if there is obsolescence
337 # XXX will probably deserve an optimised revset.
337 # XXX will probably deserve an optimised revset.
338 has_node = repo.changelog.index.has_node
338 has_node = repo.changelog.index.has_node
339 plen = -1
339 plen = -1
340 # compute the whole set of successors or descendants
340 # compute the whole set of successors or descendants
341 while len(foreground) != plen:
341 while len(foreground) != plen:
342 plen = len(foreground)
342 plen = len(foreground)
343 succs = {c.node() for c in foreground}
343 succs = {c.node() for c in foreground}
344 mutable = [c.node() for c in foreground if c.mutable()]
344 mutable = [c.node() for c in foreground if c.mutable()]
345 succs.update(allsuccessors(repo.obsstore, mutable))
345 succs.update(allsuccessors(repo.obsstore, mutable))
346 known = (n for n in succs if has_node(n))
346 known = (n for n in succs if has_node(n))
347 foreground = set(repo.set(b'%ln::', known))
347 foreground = set(repo.set(b'%ln::', known))
348 return {c.node() for c in foreground}
348 return {c.node() for c in foreground}
349
349
350
350
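# Example (hypothetical history): if changeset A was amended into A2, and A2
# has a child B, then foreground(repo, [A.node()]) contains A, A2 and B: the
# descendants of A plus the descendants of every successor of A. A plain
# 'descendants(A)' revset would miss A2 and B.

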
# effectflag field
#
# Effect-flag is a 1-byte bit field used to store what changed between a
# changeset and its successor(s).
#
# The effect flag is stored in obs-markers metadata while we iterate on the
# information design. That's why we have the EFFECTFLAGFIELD. If we come up
# with an incompatible design for the effect flag, we can store a new design
# under another field name so we don't break readers. We plan to extend the
# existing obsmarkers bit-field once the effect flag design is stabilized.
#
# The effect-flag is placed behind an experimental flag
# `effect-flags` set to off by default.
#

EFFECTFLAGFIELD = b"ef1"

DESCCHANGED = 1 << 0  # action changed the description
METACHANGED = 1 << 1  # action changed the metadata
DIFFCHANGED = 1 << 3  # action changed the diff introduced by the changeset
PARENTCHANGED = 1 << 2  # action changed the parent
USERCHANGED = 1 << 4  # the user changed
DATECHANGED = 1 << 5  # the date changed
BRANCHCHANGED = 1 << 6  # the branch changed


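# Example (hypothetical values): a marker recording that a rewrite changed
# both the description and the user would carry
# ef1 = DESCCHANGED | USERCHANGED == 17 in its metadata; a reader can then
# test individual bits, e.g. `bool(ef1 & USERCHANGED)`.

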
METABLACKLIST = [
    re.compile(b'^branch$'),
    re.compile(b'^.*-source$'),
    re.compile(b'^.*_source$'),
    re.compile(b'^source$'),
]


def metanotblacklisted(metaitem):
    """Check that the key of a meta item (extrakey, extravalue) does not
    match at least one of the blacklist patterns
    """
    metakey = metaitem[0]

    return not any(pattern.match(metakey) for pattern in METABLACKLIST)


def _prepare_hunk(hunk):
    """Drop all information but the username and patch"""
    cleanhunk = []
    for line in hunk.splitlines():
        if line.startswith(b'# User') or not line.startswith(b'#'):
            if line.startswith(b'@@'):
                line = b'@@\n'
            cleanhunk.append(line)
    return cleanhunk


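# Example (hypothetical input): patch header lines such as
# '# HG changeset patch' are dropped, '# User alice' is kept, and a hunk
# header like '@@ -1,3 +1,4 @@' is normalized to '@@' so that comparing two
# diffs ignores line-offset differences.

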
def _getdifflines(iterdiff):
    """return cleaned up lines"""
    lines = next(iterdiff, None)

    if lines is None:
        return lines

    return _prepare_hunk(lines)


def _cmpdiff(leftctx, rightctx):
    """return True if both ctx introduce the "same diff"

    This is a first and basic implementation, with many shortcomings.
    """
    diffopts = diffutil.diffallopts(leftctx.repo().ui, {b'git': True})

    # Leftctx or rightctx might be filtered, so we need to use the contexts
    # with an unfiltered repository to safely compute the diff

    # leftctx and rightctx can be from different repository views in case of
    # hgsubversion, so don't try to access them from the same repository.
    # rightctx.repo() and leftctx.repo() are not always the same.
    leftunfi = leftctx._repo.unfiltered()[leftctx.rev()]
    leftdiff = leftunfi.diff(opts=diffopts)
    rightunfi = rightctx._repo.unfiltered()[rightctx.rev()]
    rightdiff = rightunfi.diff(opts=diffopts)

    left, right = (0, 0)
    while None not in (left, right):
        left = _getdifflines(leftdiff)
        right = _getdifflines(rightdiff)

        if left != right:
            return False
    return True


def geteffectflag(source, successors):
    """From an obs-marker relation, compute what changed between the
    predecessor and the successor.
    """
    effects = 0

    for changectx in successors:
        # Check if description has changed
        if changectx.description() != source.description():
            effects |= DESCCHANGED

        # Check if user has changed
        if changectx.user() != source.user():
            effects |= USERCHANGED

        # Check if date has changed
        if changectx.date() != source.date():
            effects |= DATECHANGED

        # Check if branch has changed
        if changectx.branch() != source.branch():
            effects |= BRANCHCHANGED

        # Check if at least one of the parents has changed
        if changectx.parents() != source.parents():
            effects |= PARENTCHANGED

        # Check if other meta has changed
        changeextra = changectx.extra().items()
        ctxmeta = sorted(filter(metanotblacklisted, changeextra))

        sourceextra = source.extra().items()
        srcmeta = sorted(filter(metanotblacklisted, sourceextra))

        if ctxmeta != srcmeta:
            effects |= METACHANGED

        # Check if the diff has changed
        if not _cmpdiff(source, changectx):
            effects |= DIFFCHANGED

    return effects


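# Illustrative sketch (hypothetical caller): when recording a rewrite, the
# computed flag would typically be stored in the marker metadata under
# EFFECTFLAGFIELD, e.g.
#
#     ef1 = geteffectflag(predecessor_ctx, [successor_ctx])
#     metadata = {EFFECTFLAGFIELD: b"%d" % ef1}
#
# so that log templates can later explain what a rewrite actually changed.

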
def getobsoleted(repo, tr=None, changes=None):
    """return the set of pre-existing revisions obsoleted by a transaction

    Either the transaction or changes item of the transaction (for hooks)
    must be provided, but not both.
    """
    if (tr is None) == (changes is None):
        e = b"exactly one of tr and changes must be provided"
        raise error.ProgrammingError(e)
    torev = repo.unfiltered().changelog.index.get_rev
    phase = repo._phasecache.phase
    succsmarkers = repo.obsstore.successors.get
    public = phases.public
    if changes is None:
        changes = tr.changes
    addedmarkers = changes[b'obsmarkers']
    origrepolen = changes[b'origrepolen']
    seenrevs = set()
    obsoleted = set()
    for mark in addedmarkers:
        node = mark[0]
        rev = torev(node)
        if rev is None or rev in seenrevs or rev >= origrepolen:
            continue
        seenrevs.add(rev)
        if phase(repo, rev) == public:
            continue
        if set(succsmarkers(node) or []).issubset(addedmarkers):
            obsoleted.add(rev)
    return obsoleted


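# Note: the issubset() check above means a revision only counts as newly
# obsoleted when *every* marker using it as a predecessor was added by this
# transaction; otherwise it was already obsolete beforehand. Illustrative
# sketch (hypothetical caller):
#
#     obsoleted_revs = getobsoleted(repo, tr=transaction)
#
# where `transaction` is the hypothetical open transaction object.

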
class _succs(list):
    """small class to represent a successors set with some metadata about it"""

    def __init__(self, *args, **kwargs):
        super(_succs, self).__init__(*args, **kwargs)
        self.markers = set()

    def copy(self):
        new = _succs(self)
        new.markers = self.markers.copy()
        return new

    @util.propertycache
    def _set(self):
        # immutable
        return set(self)

    def canmerge(self, other):
        return self._set.issubset(other._set)


def successorssets(repo, initialnode, closest=False, cache=None):
    """Return set of all latest successors of initial nodes

    The successors set of a changeset A is the group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. By default, the successors set contains non-obsolete
    changesets only, walking the obsolescence graph until reaching a leaf. If
    'closest' is set to True, the closest successors-sets are returned (the
    obsolescence walk stops on known changesets).

    This function returns the full list of successor sets, which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset A
    (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successor set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will contain itself only, i.e. [(A,)].

    Finally, final successors unknown locally are considered to be pruned
    (pruned: obsoleted without any successors). (Final: successors not affected
    by markers).

    The 'closest' mode respects the repoview filtering. For example, without
    a filter it will stop at the first locally known changeset; with the
    'visible' filter it will stop on visible changesets.

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of a
    previous call to `successorssets` when multiple calls are made at the same
    time. The cache dictionary is updated in place. The caller is responsible
    for its life span. Code that makes multiple calls to `successorssets`
    *should* use this cache mechanism or risk a performance hit.

    Since results differ depending on the 'closest' mode, the same cache
    cannot be reused for both modes.
    """
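
    # Worked example (hypothetical history): if A was split into B and C, and
    # B was later amended into D, successorssets(repo, A) returns [(D, C)]:
    # one successors set combining the latest successor of each part of the
    # split. If A had instead been rewritten independently into X and Y, the
    # result would be the two divergent sets [(X,), (Y,)].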

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # elements added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Stop the walk:
        #    default case: Node is not obsolete
        #    closest case: Node is known at this repo filter level
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT's successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]

        # case 2 condition is a bit hairy because of closest,
        # we compute it on its own
        case2condition = (current not in succmarkers) or (
            closest and current != initialnode and current in repo
        )

        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif case2condition:
            # case (2): end of walk.
            if current in repo:
                # We have a valid successor.
                cache[current] = [_succs((current,))]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node; multiple successors means
            #   split; a single successor is a standard replacement.
            #
            for mark in sortedmarkers(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sortedmarkers(succmarkers[current]):
                    # successors sets contributed by this marker
                    base = _succs()
                    base.markers.add(mark)
                    markss = [base]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = prefix.copy()
                                newss.markers.update(suffix.markers)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        if productresult:
                            markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset
                seen = []
                final = []
                candidates = sorted(
                    (s for s in succssets if s), key=len, reverse=True
                )
                for cand in candidates:
                    for seensuccs in seen:
                        if cand.canmerge(seensuccs):
                            seensuccs.markers.update(cand.markers)
                            break
                    else:
                        final.append(cand)
                        seen.append(cand)
                final.reverse()  # put small successors sets first
                cache[current] = final
    return cache[initialnode]
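

# Illustrative sketch (hypothetical caller): when resolving many nodes at
# once, sharing one cache avoids re-walking common parts of the graph:
#
#     cache = {}
#     for node in nodes_of_interest:
#         ssets = successorssets(repo, node, cache=cache)
#
# The cache is only valid for a single value of `closest`.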


def successorsandmarkers(repo, ctx):
    """compute the raw data needed for computing obsfate
    Returns a list of dicts, one dict per successors set
    """
    if not ctx.obsolete():
        return None

    ssets = successorssets(repo, ctx.node(), closest=True)

    # closestsuccessors returns an empty list for pruned revisions; remap it
    # into a list containing an empty list for future processing
    if ssets == []:
        ssets = [_succs()]

    # Try to recover pruned markers
    succsmap = repo.obsstore.successors
    fullsuccessorsets = []  # successor set + markers
    for sset in ssets:
        if sset:
            fullsuccessorsets.append(sset)
        else:
            # successorssets returns an empty set() when ctx or one of its
            # successors is pruned.
            # In this case, walk the obs-markers tree again starting with ctx
            # and find the relevant pruning obs-markers, the ones without
            # successors.
            # Having these markers allows us to compute some information about
            # its fate, like who pruned this changeset and when.

            # XXX we do not catch all prune markers (eg rewritten then pruned)
            # (fix me later)
            foundany = False
            for mark in succsmap.get(ctx.node(), ()):
                if not mark[1]:
                    foundany = True
                    sset = _succs()
                    sset.markers.add(mark)
                    fullsuccessorsets.append(sset)
            if not foundany:
                fullsuccessorsets.append(_succs())

    values = []
    for sset in fullsuccessorsets:
        values.append({b'successors': sset, b'markers': sset.markers})

    return values


def _getobsfate(successorssets):
    """Compute a changeset obsolescence fate based on its successorssets.
    Successors can be the tipmost ones or the immediate ones. This function's
    return values are not meant to be shown directly to users; they are meant
    to be used by internal functions only.
    Returns one fate from the following values:
    - pruned
    - diverged
    - superseded
    - superseded_split
    """

    if len(successorssets) == 0:
        # The commit has been pruned
        return b'pruned'
    elif len(successorssets) > 1:
        return b'diverged'
    else:
        # No divergence, only one set of successors
        successors = successorssets[0]

        if len(successors) == 1:
            return b'superseded'
        else:
            return b'superseded_split'


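# Example (hypothetical inputs):
#     _getobsfate([])            -> b'pruned'
#     _getobsfate([(A,)])        -> b'superseded'
#     _getobsfate([(B, C)])      -> b'superseded_split'
#     _getobsfate([(X,), (Y,)])  -> b'diverged'

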
def obsfateverb(successorset, markers):
    """Return the verb summarizing the successorset and potentially using
    information from the markers
    """
    if not successorset:
        verb = b'pruned'
    elif len(successorset) == 1:
        verb = b'rewritten'
    else:
        verb = b'split'
    return verb


def markersdates(markers):
    """returns the list of dates for a list of markers"""
    return [m[4] for m in markers]


def markersusers(markers):
    """Returns a sorted list of markers users without duplicates"""
    markersmeta = [dict(m[3]) for m in markers]
    users = {
        encoding.tolocal(meta[b'user'])
        for meta in markersmeta
        if meta.get(b'user')
    }

    return sorted(users)


def markersoperations(markers):
    """Returns a sorted list of markers operations without duplicates"""
    markersmeta = [dict(m[3]) for m in markers]
    operations = {
        meta.get(b'operation') for meta in markersmeta if meta.get(b'operation')
    }

    return sorted(operations)


def obsfateprinter(ui, repo, successors, markers, formatctx):
    """Build an obsfate string for a single successorset using all obsfate
    related functions defined in obsutil
    """
    quiet = ui.quiet
    verbose = ui.verbose
    normal = not verbose and not quiet

    line = []

    # Verb
    line.append(obsfateverb(successors, markers))

    # Operations
    operations = markersoperations(markers)
    if operations:
        line.append(b" using %s" % b", ".join(operations))

    # Successors
    if successors:
        fmtsuccessors = [formatctx(repo[succ]) for succ in successors]
        line.append(b" as %s" % b", ".join(fmtsuccessors))

    # Users
    users = markersusers(markers)
    # Filter out the current user in non-verbose mode to reduce the amount of
    # information
    if not verbose:
        currentuser = ui.username(acceptempty=True)
        if len(users) == 1 and currentuser in users:
            users = None

    if (verbose or normal) and users:
        line.append(b" by %s" % b", ".join(users))

    # Date
    dates = markersdates(markers)

    if dates and verbose:
        min_date = min(dates)
        max_date = max(dates)

        if min_date == max_date:
            fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
            line.append(b" (at %s)" % fmtmin_date)
        else:
            fmtmin_date = dateutil.datestr(min_date, b'%Y-%m-%d %H:%M %1%2')
            fmtmax_date = dateutil.datestr(max_date, b'%Y-%m-%d %H:%M %1%2')
            line.append(b" (between %s and %s)" % (fmtmin_date, fmtmax_date))

    return b"".join(line)


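# Example (hypothetical output): assuming a formatctx that prints short
# hashes, a changeset amended by another user could render, in verbose mode,
# as:
#
#     rewritten using amend as 1ab2c3d4e5f6 by alice (at 2024-01-01 12:00 +0000)

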
filteredmsgtable = {
    b"pruned": _(b"hidden revision '%s' is pruned"),
    b"diverged": _(b"hidden revision '%s' has diverged"),
    b"superseded": _(b"hidden revision '%s' was rewritten as: %s"),
    b"superseded_split": _(b"hidden revision '%s' was split as: %s"),
    b"superseded_split_several": _(
        b"hidden revision '%s' was split as: %s and %d more"
    ),
}


def _getfilteredreason(repo, changeid, ctx):
    """return a human-friendly string on why an obsolete changeset is hidden"""
    successors = successorssets(repo, ctx.node())
    fate = _getobsfate(successors)

    # Be more precise in case the revision is superseded
    if fate == b'pruned':
        return filteredmsgtable[b'pruned'] % changeid
    elif fate == b'diverged':
        return filteredmsgtable[b'diverged'] % changeid
    elif fate == b'superseded':
        single_successor = short(successors[0][0])
        return filteredmsgtable[b'superseded'] % (changeid, single_successor)
    elif fate == b'superseded_split':
        succs = []
        for node_id in successors[0]:
            succs.append(short(node_id))

        if len(succs) <= 2:
            fmtsuccs = b', '.join(succs)
            return filteredmsgtable[b'superseded_split'] % (changeid, fmtsuccs)
        else:
            firstsuccessors = b', '.join(succs[:2])
            remainingnumber = len(succs) - 2

            args = (changeid, firstsuccessors, remainingnumber)
            return filteredmsgtable[b'superseded_split_several'] % args


def divergentsets(repo, ctx):
    """Compute sets of commits divergent with a given one"""
    cache = {}
    base = {}
    for n in allpredecessors(repo.obsstore, [ctx.node()]):
        if n == ctx.node():
            # a node can't be a base for divergence with itself
            continue
        # pass the cache by keyword: the third positional argument of
        # successorssets is `closest`, not `cache`
        nsuccsets = successorssets(repo, n, cache=cache)
        for nsuccset in nsuccsets:
            if ctx.node() in nsuccset:
                # we are only interested in *other* successor sets
                continue
            if tuple(nsuccset) in base:
                # we already know the latest base for this divergence
                continue
            base[tuple(nsuccset)] = n
    return [
        {b'divergentnodes': divset, b'commonpredecessor': b}
        for divset, b in base.items()
    ]


def whyunstable(repo, ctx):
    result = []
    if ctx.orphan():
        for parent in ctx.parents():
            kind = None
            if parent.orphan():
                kind = b'orphan'
            elif parent.obsolete():
                kind = b'obsolete'
            if kind is not None:
                result.append(
                    {
                        b'instability': b'orphan',
                        b'reason': b'%s parent' % kind,
                        b'node': parent.hex(),
                    }
                )
    if ctx.phasedivergent():
        predecessors = allpredecessors(
            repo.obsstore, [ctx.node()], ignoreflags=bumpedfix
        )
        immutable = [
            repo[p] for p in predecessors if p in repo and not repo[p].mutable()
        ]
        for predecessor in immutable:
            result.append(
                {
                    b'instability': b'phase-divergent',
                    b'reason': b'immutable predecessor',
                    b'node': predecessor.hex(),
                }
            )
    if ctx.contentdivergent():
        dsets = divergentsets(repo, ctx)
        for dset in dsets:
            divnodes = [repo[n] for n in dset[b'divergentnodes']]
            result.append(
                {
                    b'instability': b'content-divergent',
                    b'divergentnodes': divnodes,
                    b'reason': b'predecessor',
                    b'node': hex(dset[b'commonpredecessor']),
                }
            )
    return result
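

# Example (hypothetical return value): for an orphan changeset whose parent
# was obsoleted, whyunstable() would yield something like
#
#     [{b'instability': b'orphan',
#       b'reason': b'obsolete parent',
#       b'node': b'1ab2c3...'}]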