# sidedata: apply basic but tight security around exchange
# marmoute - r43401:c17a63eb default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic container to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of Bytes used by the parameters.

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.
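The stream parameter encoding described above (space separated, urlquoted
`<name>=<value>` items, capital first letter means mandatory) can be sketched
as follows. This is a minimal illustration, not Mercurial's actual parser; the
helper name `parsestreamparams` is invented:

```python
from urllib.parse import unquote


def parsestreamparams(blob):
    """Parse a stream-level parameter blob into two dicts.

    Returns (mandatory, advisory); a valueless parameter maps to None.
    """
    mandatory, advisory = {}, {}
    for item in blob.split(' '):
        # each item is <name> or <name>=<value>, both parts urlquoted
        name, sep, value = item.partition('=')
        name, value = unquote(name), unquote(value)
        if not name or not name[0].isalpha():
            raise ValueError('malformed stream parameter: %r' % item)
        # a capital first letter marks the parameter as mandatory
        target = mandatory if name[0].isupper() else advisory
        target[name] = value if sep else None
    return mandatory, advisory
```

For example, `parsestreamparams('Compression=GZ nbchanges=3')` sorts
`Compression` into the mandatory dict and `nbchanges` into the advisory one.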

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  to interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    Part parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

      Mandatory parameters come first, then the advisory ones.

      Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
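The payload framing above (int32-prefixed chunks terminated by a zero size
chunk) can be read with a short generator. This is an illustrative sketch,
not Mercurial's reader; `readchunks` is an invented name:

```python
import struct
from io import BytesIO


def readchunks(stream):
    """Yield payload chunks until the zero-size end-of-payload marker."""
    while True:
        # chunksize is a signed big-endian int32
        (size,) = struct.unpack('>i', stream.read(4))
        if size == 0:
            return  # end of payload
        if size < 0:
            # reserved for special case processing, none defined yet
            raise NotImplementedError('negative chunk size')
        yield stream.read(size)
```

Joining the chunks of a framed payload recovers the original bytes.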

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
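The dispatch rule above (case-insensitive lookup, any uppercase character
means mandatory) can be sketched as follows; `handlers` stands in for a
hypothetical registry and this is not Mercurial's actual dispatch code:

```python
def dispatch(handlers, parttype):
    """Return the handler for parttype, None for an unknown advisory part.

    Raises KeyError for an unknown mandatory part.
    """
    # any uppercase char in the received part type marks it mandatory
    mandatory = parttype != parttype.lower()
    # handler lookup itself is case insensitive
    handler = handlers.get(parttype.lower())
    if handler is None:
        if mandatory:
            raise KeyError('unsupported mandatory part: %s' % parttype)
        return None  # advisory part with no handler: silently ignored
    return handler
```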
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    node as nodemod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    streamclone,
    tags,
    url,
    util,
)
from .utils import stringutil

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
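For instance, the format built for two parameters unpacks the four size bytes
of the `param-sizes` field in one call. A small standalone check, duplicating
the one-line helper so the snippet runs without Mercurial:

```python
import struct


def _makefpartparamsizes(nbparams):
    # one (key size, value size) byte pair per parameter, big-endian
    return b'>' + (b'BB' * nbparams)


fmt = _makefpartparamsizes(2)  # b'>BBBB'
sizes = struct.unpack(fmt, bytes([3, 5, 4, 0]))
# pair the flat sizes back into (key size, value size) couples
couples = list(zip(sizes[::2], sizes[1::2]))
```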


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
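The record-keeping behavior documented above can be exercised directly. The
snippet below re-declares a trimmed copy of the class (only the methods it
uses) so it runs standalone, outside Mercurial:

```python
# trimmed re-declaration of the unbundlerecords core for illustration
class unbundlerecords(object):
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)


records = unbundlerecords()
records.add('changegroup', {'return': 1})
records.add('obsmarkers', {'new': 2})
records.add('changegroup', {'return': 0})
# per-category access keeps insertion order within the category
cgs = records['changegroup']
# iterating the object itself is chronological across categories
order = [cat for cat, _entry in records]
```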


class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True, source=b''):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries value that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
        # and should not gracefully cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the type
            # of bundle. We should probably clean up or drop this return code
            # craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(repo, unbundler, transactiongetter=None, op=None, source=b''):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(b'changegroup', {b'return': ret})
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler
534
534
535
535
def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # The handler is called outside the try block in _gethandler so that we
    # don't risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)


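The caps blob format handled by `decodecaps()`/`encodecaps()` above round-trips cleanly. A self-contained sketch of the same format, using the stdlib `urllib.parse` in place of Mercurial's `urlreq` wrapper (an illustration, not the module's actual API):

```python
# Sketch of the bundle2 capabilities blob: one capability per line, with
# optional '='-separated, comma-joined values; names and values are
# percent-quoted so separators can appear inside them.
from urllib.parse import quote, unquote


def encode_caps(caps):
    chunks = []
    for name in sorted(caps):
        vals = [quote(v) for v in caps[name]]
        line = quote(name)
        if vals:
            line = '%s=%s' % (line, ','.join(vals))
        chunks.append(line)
    return '\n'.join(chunks)


def decode_caps(blob):
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, rest = line.split('=', 1)
            vals = [unquote(v) for v in rest.split(',')]
        caps[unquote(key)] = vals
    return caps


caps = {'changegroup': ['01', '02'], 'digests': ['md5', 'sha1'], 'phases': []}
assert decode_caps(encode_caps(caps)) == caps
```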
bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding one if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output."""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged


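The stream-level parameter block built by `bundle20._paramchunk()` above, and parsed back by `unbundle20._processallparams()` below, is a space-joined list of percent-quoted `name=value` pairs (the value being optional). A self-contained sketch of that encoding, using the stdlib `urllib.parse` instead of Mercurial's `urlreq` (illustrative names, not the module's actual API):

```python
# Sketch of the bundle2 stream-parameter block: each parameter is quoted,
# rendered as 'name' or 'name=value', and the parameters are joined with
# single spaces. Quoting keeps '=' and ' ' inside names/values unambiguous.
from urllib.parse import quote, unquote


def encode_params(params):
    blocks = []
    for name, value in params:
        block = quote(name)
        if value is not None:
            block = '%s=%s' % (block, quote(value))
        blocks.append(block)
    return ' '.join(blocks)


def decode_params(blob):
    params = {}
    for p in blob.split(' '):
        fields = [unquote(f) for f in p.split('=', 1)]
        if len(fields) < 2:
            fields.append(None)  # valueless parameter
        params[fields[0]] = fields[1]
    return params


blob = encode_params([('Compression', 'BZ'), ('simple', None)])
assert blob == 'Compression=BZ simple'
assert decode_params(blob) == {'Compression': 'BZ', 'simple': None}
```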
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler


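The version dispatch in `getunbundler()` above can be sketched on its own: the first four bytes of a bundle are the `HG` magic plus a two-byte version that selects the unbundler class. `io.BytesIO` stands in for the real stream and the handler values are illustrative:

```python
# Sketch of bundle magic-string sniffing: read 4 bytes, check the 'HG'
# magic, then look the version up in a format map.
import io

FORMATS = {b'20': 'unbundle20'}  # version -> handler (illustrative value)


def sniff(fp):
    magicstring = fp.read(4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    try:
        return FORMATS[version]
    except KeyError:
        raise ValueError('unknown bundle version %s' % version.decode())


assert sniff(io.BytesIO(b'HG20<rest of stream>')) == 'unbundle20'
```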
class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = b'HG20'

    def __init__(self, ui, fp):
        self.ui = ui
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, b'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """split the parameter block into individual parameters and process
        them"""
        params = util.sortdict()
        for p in paramsblock.split(b' '):
            p = p.split(b'=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a
        ``BundleUnknownFeatureError`` when they are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or trigger a failure.
        """
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, b"ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends: it relies on the bundle
        being interpreted to find its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert b'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            # The payload itself is decompressed below, so drop
            # the compression parameter passed down to compensate.
            outparams = []
            for p in params.split(b' '):
                k, v = p.split(b'=', 1)
                if k.lower() != b'compression':
                    outparams.append(p)
            outparams = b' '.join(outparams)
            yield _pack(_fstreamparamsize, len(outparams))
            yield outparams
        else:
            yield _pack(_fstreamparamsize, paramssize)
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError(b'negative chunk size: %i' % size)
            yield self._readexact(size)

    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, b'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, b'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params  # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()


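The low-level framing that `_forwardchunks()` copies through is a sequence of big-endian size-prefixed chunks, where two consecutive zero sizes end the stream (one closes the last part, one closes the bundle). A simplified, self-contained sketch of that framing, leaving out the interrupt flag and header/payload distinction for brevity:

```python
# Sketch of bundle2-style chunk framing: every chunk is prefixed with a
# big-endian signed 32-bit length; a zero length ends a part, and a second
# consecutive zero ends the whole stream.
import io
import struct


def write_frames(out, parts):
    for part in parts:
        for chunk in part:
            out.write(struct.pack('>i', len(chunk)))
            out.write(chunk)
        out.write(struct.pack('>i', 0))  # end of this part
    out.write(struct.pack('>i', 0))  # end of stream


def read_frames(fp):
    emptycount = 0
    while emptycount < 2:
        (size,) = struct.unpack('>i', fp.read(4))
        if size == 0:
            emptycount += 1
            continue
        emptycount = 0
        yield fp.read(size)


buf = io.BytesIO()
write_frames(buf, [[b'hello', b'world'], [b'!']])
buf.seek(0)
assert list(read_frames(buf)) == [b'hello', b'world', b'!']
```

Because the end marker is in-band, a reader can consume the stream without knowing its total length up front — which is exactly why `_forwardchunks()` exists for transports like ssh that carry no length framing of their own.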
formatmap = {b'20': unbundle20}

b2streamparamsmap = {}


def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""

    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func

    return decorator


@b2streamparamhandler(b'compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither the data nor the parameters may be modified after generation has
    begun.
    """

    def __init__(
        self,
        parttype,
        mandatoryparams=(),
        advisoryparams=(),
        data=b'',
        mandatory=True,
    ):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError(b'duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = b"%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return b'<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
            cls,
            id(self),
            self.id,
            self.type,
            self.mandatory,
        )

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no part id assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(
            self.type,
            self._mandatoryparams,
            self._advisoryparams,
            self._data,
            self.mandatory,
        )

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        self._data = data
1046
1046
1047 @property
1047 @property
1048 def mandatoryparams(self):
1048 def mandatoryparams(self):
1049 # make it an immutable tuple to force people through ``addparam``
1049 # make it an immutable tuple to force people through ``addparam``
1050 return tuple(self._mandatoryparams)
1050 return tuple(self._mandatoryparams)
1051
1051
1052 @property
1052 @property
1053 def advisoryparams(self):
1053 def advisoryparams(self):
1054 # make it an immutable tuple to force people through ``addparam``
1054 # make it an immutable tuple to force people through ``addparam``
1055 return tuple(self._advisoryparams)
1055 return tuple(self._advisoryparams)
1056
1056
1057 def addparam(self, name, value=b'', mandatory=True):
1057 def addparam(self, name, value=b'', mandatory=True):
1058 """add a parameter to the part
1058 """add a parameter to the part
1059
1059
1060 If 'mandatory' is set to True, the remote handler must claim support
1060 If 'mandatory' is set to True, the remote handler must claim support
1061 for this parameter or the unbundling will be aborted.
1061 for this parameter or the unbundling will be aborted.
1062
1062
1063 The 'name' and 'value' cannot exceed 255 bytes each.
1063 The 'name' and 'value' cannot exceed 255 bytes each.
1064 """
1064 """
1065 if self._generated is not None:
1065 if self._generated is not None:
1066 raise error.ReadOnlyPartError(b'part is being generated')
1066 raise error.ReadOnlyPartError(b'part is being generated')
1067 if name in self._seenparams:
1067 if name in self._seenparams:
1068 raise ValueError(b'duplicated params: %s' % name)
1068 raise ValueError(b'duplicated params: %s' % name)
1069 self._seenparams.add(name)
1069 self._seenparams.add(name)
1070 params = self._advisoryparams
1070 params = self._advisoryparams
1071 if mandatory:
1071 if mandatory:
1072 params = self._mandatoryparams
1072 params = self._mandatoryparams
1073 params.append((name, value))
1073 params.append((name, value))
1074
1074
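# The mandatory/advisory bookkeeping that ``addparam`` performs can be
# sketched standalone. This is a hypothetical, self-contained rewrite for
# illustration (class and parameter names are illustrative, not the real
# Mercurial API): parameters split into two lists, with duplicate names
# rejected via a "seen" set.

```python
class paramsketch(object):
    """Illustrative duplicate-checked parameter container."""

    def __init__(self):
        self._mandatoryparams = []
        self._advisoryparams = []
        self._seenparams = set()

    def addparam(self, name, value=b'', mandatory=True):
        # reject duplicate parameter names, as bundlepart.addparam() does
        if name in self._seenparams:
            raise ValueError(b'duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))


part = paramsketch()
part.addparam(b'version', b'02')                      # mandatory by default
part.addparam(b'nbchanges', b'10', mandatory=False)   # advisory
print(part._mandatoryparams)  # [(b'version', b'02')]
print(part._advisoryparams)   # [(b'nbchanges', b'10')]
```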
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError(b'part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = [b'bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            if not self.data:
                msg.append(b' empty payload')
            elif util.safehasattr(self.data, 'next') or util.safehasattr(
                self.data, b'__next__'
            ):
                msg.append(b' streamed payload')
            else:
                msg.append(b' %i bytes payload' % len(self.data))
            msg.append(b'\n')
            ui.debug(b''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [
            _pack(_fparttypesize, len(parttype)),
            parttype,
            _pack(_fpartid, self.id),
        ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = b''.join(header)
        except TypeError:
            raise TypeError(
                r'Found a non-bytes trying to '
                r'build bundle part header: %r' % header
            )
        outdebug(ui, b'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, b'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug(b'bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug(
                b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
            )
            tb = sys.exc_info()[2]
            msg = b'unexpected error: %s' % bexc
            interpart = bundlepart(
                b'error:abort', [(b'message', msg)], mandatory=False
            )
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, b'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, b'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next') or util.safehasattr(
            self.data, b'__next__'
        ):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

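# The parameter encoding built in getchunks() above can be sketched on its
# own: each parameter contributes one (key size, value size) pair of unsigned
# bytes, followed by the concatenated keys and values, which is why addparam()
# limits names and values to 255 bytes each. This is a standalone,
# hypothetical rewrite using the standard struct module, not the module's own
# ``_pack``/``_makefpartparamsizes`` helpers.

```python
import struct


def packparams(params):
    """Pack (key, value) byte pairs as size bytes followed by the blobs."""
    sizes = []
    blobs = []
    for key, value in params:
        # one unsigned byte per size, hence the 255-byte limit
        assert len(key) <= 255 and len(value) <= 255
        sizes.extend((len(key), len(value)))
        blobs.extend((key, value))
    fmt = '>' + 'BB' * len(params)
    return struct.pack(fmt, *sizes) + b''.join(blobs)


encoded = packparams([(b'version', b'02')])
print(encoded)  # b'\x07\x02version02'
```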

flaginterrupt = -1


class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug(
            b'bundle2-input-stream-interrupt: opening out of band context\n'
        )
        indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, b'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug(
            b'bundle2-input-stream-interrupt: closing out of band context\n'
        )


class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError(b'no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable(b'no repo access from stream interruption')


def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool(b'devel', b'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, b'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(
                    _(
                        b'stream ended unexpectedly '
                        b' (got %d bytes, expected %d)'
                    )
                    % (len(s), chunksize)
                )

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                b'negative payload chunk size: %s' % chunksize
            )

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(
                _(b'stream ended unexpectedly (got %d bytes, expected %d)')
                % (len(s), headersize)
            )

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)

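# The framing that decodepayloadchunks() consumes can be illustrated with a
# self-contained round trip: each chunk is preceded by a big-endian signed
# 32-bit size, a size of 0 terminates the payload, and -1 flags an
# out-of-band interrupt part. The helper names below are hypothetical, and
# the decoder deliberately skips the interrupt case handled above.

```python
import io
import struct


def encodeframes(chunks):
    """Frame each chunk with a >i size prefix and a 0 terminator."""
    out = []
    for chunk in chunks:
        out.append(struct.pack('>i', len(chunk)))
        out.append(chunk)
    out.append(struct.pack('>i', 0))  # end-of-payload marker
    return b''.join(out)


def decodeframes(fh):
    """Yield chunks until the 0-size terminator is seen."""
    while True:
        size = struct.unpack('>i', fh.read(4))[0]
        if size == 0:
            return
        assert size > 0  # -1 (interrupt) is not handled in this sketch
        yield fh.read(size)


stream = io.BytesIO(encodeframes([b'abc', b'defg']))
print(list(decodeframes(stream)))  # [b'abc', b'defg']
```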

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
            fp, b'tell'
        )
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset : (offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, b'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = self.type != self.type.lower()
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug(
                    b'bundle2-input-part: total payload size %i\n' % self._pos
                )
            self.consumed = True
        return data


class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """

    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, b'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), (
                b'Unknown chunk %d' % chunknum
            )
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError(b'Unknown chunk')

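# The lookup performed by _findchunk() above can be shown standalone: the
# chunk index maps decoded-payload offsets to chunk starts, and a payload
# position resolves to a (chunk number, offset within that chunk) pair. This
# is an illustrative rewrite with a hypothetical free function, not part of
# the class.

```python
def findchunk(chunkindex, pos):
    """Resolve payload position ``pos`` against [(payload offset, file offset)]."""
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            # pos lands inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')


# chunks of decoded sizes 10 and 20, starting at payload offsets 0 and 10
index = [(0, 100), (10, 114), (30, 138)]
print(findchunk(index, 0))   # (0, 0)
print(findchunk(index, 15))  # (1, 5)
```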
    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError(b'Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError(b'Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_(b'Seek failed\n'))
1540 raise error.Abort(_(b'Seek failed\n'))
1541 self._pos = newpos
1541 self._pos = newpos
1542
1542
1543 def _seekfp(self, offset, whence=0):
1543 def _seekfp(self, offset, whence=0):
1544 """move the underlying file pointer
1544 """move the underlying file pointer
1545
1545
1546 This method is meant for internal usage by the bundle2 protocol only.
1546 This method is meant for internal usage by the bundle2 protocol only.
1547 They directly manipulate the low level stream including bundle2 level
1547 They directly manipulate the low level stream including bundle2 level
1548 instruction.
1548 instruction.
1549
1549
1550 Do not use it to implement higher-level logic or methods."""
1550 Do not use it to implement higher-level logic or methods."""
1551 if self._seekable:
1551 if self._seekable:
1552 return self._fp.seek(offset, whence)
1552 return self._fp.seek(offset, whence)
1553 else:
1553 else:
1554 raise NotImplementedError(_(b'File pointer is not seekable'))
1554 raise NotImplementedError(_(b'File pointer is not seekable'))
1555
1555
1556 def _tellfp(self):
1556 def _tellfp(self):
1557 """return the file offset, or None if file is not seekable
1557 """return the file offset, or None if file is not seekable
1558
1558
1559 This method is meant for internal usage by the bundle2 protocol only.
1559 This method is meant for internal usage by the bundle2 protocol only.
1560 They directly manipulate the low level stream including bundle2 level
1560 They directly manipulate the low level stream including bundle2 level
1561 instruction.
1561 instruction.
1562
1562
1563 Do not use it to implement higher-level logic or methods."""
1563 Do not use it to implement higher-level logic or methods."""
1564 if self._seekable:
1564 if self._seekable:
1565 try:
1565 try:
1566 return self._fp.tell()
1566 return self._fp.tell()
1567 except IOError as e:
1567 except IOError as e:
1568 if e.errno == errno.ESPIPE:
1568 if e.errno == errno.ESPIPE:
1569 self._seekable = False
1569 self._seekable = False
1570 else:
1570 else:
1571 raise
1571 raise
1572 return None
1572 return None
1573
1573
1574
1574
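The chunk index above records pairs of (payload offset, file offset) so that `seek()` can restart payload decoding at the nearest earlier chunk boundary instead of re-reading from the start. A minimal standalone model of that lookup, mirroring the logic of `_findchunk` with made-up offsets (not the Mercurial API):

```python
# Sketch of the (payload_pos, file_pos) chunk index used by the seekable
# unbundle part. find_chunk returns (chunk number, offset inside chunk)
# for a given absolute payload position, like _findchunk above.
def find_chunk(chunkindex, pos):
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')


# Index for three payload chunks of sizes 10, 20 and 5; the file offsets
# (second element) are invented for illustration.
index = [(0, 100), (10, 115), (30, 140), (35, 150)]

first = find_chunk(index, 0)    # start of chunk 0
inside = find_chunk(index, 12)  # 2 bytes into chunk 1
```

Seeking then means: find the chunk, rebuild the payload stream starting at that chunk's file offset, and read forward the in-chunk remainder.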
# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {
    b'HG20': (),
    b'bookmarks': (),
    b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
    b'listkeys': (),
    b'pushkey': (),
    b'digests': tuple(sorted(util.DIGESTS.keys())),
    b'remote-changegroup': (b'http', b'https'),
    b'hgtagsfnodes': (),
    b'rev-branch-cache': (),
    b'phases': (b'heads',),
    b'stream': (b'v2',),
}


def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in (b'client', b'server'):
        raise error.ProgrammingError(b'role argument must be client or server')

    caps = capabilities.copy()
    caps[b'changegroup'] = tuple(
        sorted(changegroup.supportedincomingversions(repo))
    )
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
        caps[b'obsmarkers'] = supportedformat
    if allowpushback:
        caps[b'pushback'] = ()
    cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
    if cpmode == b'check-related':
        caps[b'checkheads'] = (b'related',)
    if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
        caps.pop(b'phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == b'server':
        streamsupported = repo.ui.configbool(
            b'server', b'uncompressed', untrusted=True
        )
        featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop(b'stream')
    # Else always advertise support on client, because payload support
    # should always be advertised.

    return caps


def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable(b'bundle2')
    if not raw and raw != b'':
        return {}
    capsblob = urlreq.unquote(remote.capable(b'bundle2'))
    return decodecaps(capsblob)


def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get(b'obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]


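`obsmarkersversion()` above turns capability strings of the form `b'V<N>'` into a list of integer format versions, silently ignoring anything else. The same parsing in isolation, independent of Mercurial:

```python
# Parse b'V<N>' capability strings into integer versions, the way
# obsmarkersversion() handles the b'obsmarkers' capability entries.
def parse_versions(obscaps):
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]


# Unknown entries (here b'experimental') are skipped, not errors.
versions = parse_versions((b'V0', b'V1', b'experimental', b'V2'))
```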
def writenewbundle(
    ui,
    repo,
    source,
    filename,
    bundletype,
    outgoing,
    opts,
    vfs=None,
    compression=None,
    compopts=None,
):
    if bundletype.startswith(b'HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
        return writebundle(
            ui,
            cg,
            filename,
            bundletype,
            vfs=vfs,
            compression=compression,
            compopts=compopts,
        )
    elif not bundletype.startswith(b'HG20'):
        raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)

    caps = {}
    if b'obsolescence' in opts:
        caps[b'obsmarkers'] = (b'V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we might not always want a changegroup in such bundle, for example in
    # stream bundles
    if opts.get(b'changegroup', True):
        cgversion = opts.get(b'cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        if opts.get(b'phases') and repo.revs(
            b'%ln and secret()', outgoing.missingheads
        ):
            part.addparam(
                b'targetphase', b'%d' % phases.secret, mandatory=False
            )
        if b'exp-sidedata-flag' in repo.requirements:
            part.addparam(b'exp-sidedata', b'1')

    if opts.get(b'streamv2', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get(b'revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get(b'obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get(b'phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart(b'phase-heads', data=phasedata)


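This changeset's sender-side change is the `exp-sidedata` parameter: it is attached to the changegroup part only when the repository's requirements contain the experimental sidedata flag, so the receiver can tell whether sidedata is expected. A hedged sketch of that gating with a plain dict standing in for a bundle part (stand-in names, not the Mercurial API):

```python
# Sketch: advertise b'exp-sidedata' on a changegroup part only when the
# repo carries the experimental requirement. A dict stands in for the
# real part object and its addparam() calls.
def changegroup_params(requirements):
    params = {b'version': b'02'}
    if b'exp-sidedata-flag' in requirements:
        params[b'exp-sidedata'] = b'1'
    return params


with_sidedata = changegroup_params({b'revlogv1', b'exp-sidedata-flag'})
without_sidedata = changegroup_params({b'revlogv1'})
```

On the receiving end (see `handlechangegroup` below), a sidedata-enabled repository refuses a bundle that lacks this parameter.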
def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changeset
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))


def addpartrevbranchcache(repo, bundler, outgoing):
    # we include the rev branch cache for the bundle changeset
    # (as an optional part)
    cache = repo.revbranchcache()
    cl = repo.unfiltered().changelog
    branchesdata = collections.defaultdict(lambda: (set(), set()))
    for node in outgoing.missing:
        branch, close = cache.branchinfo(cl.rev(node))
        branchesdata[branch][close].add(node)

    def generate():
        for branch, (nodes, closed) in sorted(branchesdata.items()):
            utf8branch = encoding.fromlocal(branch)
            yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
            yield utf8branch
            for n in sorted(nodes):
                yield n
            for n in sorted(closed):
                yield n

    bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)


def _formatrequirementsspec(requirements):
    requirements = [req for req in requirements if req != b"shared"]
    return urlreq.quote(b','.join(sorted(requirements)))


def _formatrequirementsparams(requirements):
    requirements = _formatrequirementsspec(requirements)
    params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
    return params


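`_formatrequirementsspec()` drops the local-only `shared` requirement, sorts the rest, joins them with commas, and percent-quotes the result so it can travel as a part parameter. The same transformation with the standard library, using `str` instead of Mercurial's `bytes` handling for simplicity:

```python
from urllib.parse import quote

# Mirror of _formatrequirementsspec: drop 'shared', sort, join, quote.
# str is used here instead of bytes purely for illustration.
def format_requirements(requirements):
    reqs = sorted(r for r in requirements if r != 'shared')
    return quote(','.join(reqs))


spec = format_requirements({'revlogv1', 'shared', 'store'})
```

The comma is quoted to `%2C`, which is why the receiving side unquotes before splitting.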
def addpartbundlestream2(bundler, repo, **kwargs):
    if not kwargs.get(r'stream', False):
        return

    if not streamclone.allowservergeneration(repo):
        raise error.Abort(
            _(
                b'stream data requested but server does not allow '
                b'this feature'
            ),
            hint=_(
                b'well-behaved clients should not be '
                b'requesting stream data from servers not '
                b'advertising it; the client may be buggy'
            ),
        )

    # Stream clones don't compress well. And compression undermines a
    # goal of stream clones, which is to be fast. Communicate the desire
    # to avoid compression to consumers of the bundle.
    bundler.prefercompressed = False

    # get the includes and excludes
    includepats = kwargs.get(r'includepats')
    excludepats = kwargs.get(r'excludepats')

    narrowstream = repo.ui.configbool(
        b'experimental', b'server.stream-narrow-clones'
    )

    if (includepats or excludepats) and not narrowstream:
        raise error.Abort(_(b'server does not support narrow stream clones'))

    includeobsmarkers = False
    if repo.obsstore:
        remoteversions = obsmarkersversion(bundler.capabilities)
        if not remoteversions:
            raise error.Abort(
                _(
                    b'server has obsolescence markers, but client '
                    b'cannot receive them via stream clone'
                )
            )
        elif repo.obsstore._version in remoteversions:
            includeobsmarkers = True

    filecount, bytecount, it = streamclone.generatev2(
        repo, includepats, excludepats, includeobsmarkers
    )
    requirements = _formatrequirementsspec(repo.requirements)
    part = bundler.newpart(b'stream2', data=it)
    part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
    part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
    part.addparam(b'requirements', requirements, mandatory=True)


def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError(b'bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart(b'obsmarkers', data=stream)


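`buildobsmarkerspart()` delegates format negotiation to `obsolete.commonversion()`, which picks a marker format both peers understand and returns `None` when there is no overlap. A plausible model of that negotiation is "highest version present on both sides"; the helper below is an assumption-labeled sketch, not the real `mercurial.obsolete` implementation:

```python
# Assumed model of obsmarker format negotiation: take the highest
# version supported by both the local and remote lists, or None when
# the sets do not intersect (which makes the caller raise ValueError).
def common_version(local_versions, remote_versions):
    shared = set(local_versions) & set(remote_versions)
    return max(shared) if shared else None


best = common_version([0, 1], [1, 2])
none = common_version([0], [2])
```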
def writebundle(
    ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == b"HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != b'01':
            raise error.Abort(
                _(b'old bundle types only support v1 changegroups')
            )
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_(b'unknown stream compression type: %s') % comp)
        compengine = util.compengines.forbundletype(comp)

        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk

        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result


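The combination rule above encodes head changes in the return value: 1 means success with no head change, values above 1 mean `ret - 1` heads were added, values below -1 mean heads were removed, and a 0 anywhere forces 0. The same arithmetic as a standalone function (hypothetical name, operating on a plain list instead of bundle2 operation records):

```python
# Combine addchangegroup-style return codes the way
# combinechangegroupresults does: ret > 1 contributes (ret - 1) added
# heads, ret < -1 contributes (ret + 1) removed heads, 0 short-circuits.
def combine_results(results):
    changedheads = 0
    for ret in results:
        if ret == 0:
            return 0
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        return 1 + changedheads
    if changedheads < 0:
        return -1 + changedheads
    return 1


# Two groups adding 1 and 2 heads combine to 3 added heads in total.
combined = combine_results([2, 3])
```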
@parthandler(
    b'changegroup',
    (
        b'version',
        b'nbchanges',
        b'exp-sidedata',
        b'treemanifest',
        b'targetphase',
    ),
)
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    from . import localrepo

    tr = op.gettransaction()
    unpackerversion = inpart.params.get(b'version', b'01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if b'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get(b'nbchanges'))
    if (
        b'treemanifest' in inpart.params
        and b'treemanifest' not in op.repo.requirements
    ):
        if len(op.repo.changelog) != 0:
            raise error.Abort(
                _(
                    b"bundle contains tree manifests, but local repo is "
                    b"non-empty and does not use tree manifests"
                )
            )
        op.repo.requirements.add(b'treemanifest')
        op.repo.svfs.options = localrepo.resolvestorevfsoptions(
            op.repo.ui, op.repo.requirements, op.repo.features
        )
        op.repo._writerequirements()

    bundlesidedata = bool(b'exp-sidedata' in inpart.params)
    reposidedata = bool(b'exp-sidedata-flag' in op.repo.requirements)
    if reposidedata and not bundlesidedata:
        msg = b"repository is using sidedata but the bundle source does not"
        hint = b'this is currently unsupported'
        raise error.Abort(msg, hint=hint)

    extrakwargs = {}
    targetphase = inpart.params.get(b'targetphase')
    if targetphase is not None:
        extrakwargs[r'targetphase'] = int(targetphase)
    ret = _processchangegroup(
        op,
        cg,
        tr,
        b'bundle2',
        b'bundle2',
        expectedtotal=nbchangesets,
        **extrakwargs
    )
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup', mandatory=False)
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    assert not inpart.read()


_remotechangegroupparams = tuple(
    [b'url', b'size', b'digests']
    + [b'digest:%s' % k for k in util.DIGESTS.keys()]
)


@parthandler(b'remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server's knowledge about the
        bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the
        bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params[b'url']
    except KeyError:
        raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities[b'remote-changegroup']:
        raise error.Abort(
            _(b'remote-changegroup does not support %s urls')
            % parsed_url.scheme
        )

    try:
        size = int(inpart.params[b'size'])
    except ValueError:
        raise error.Abort(
            _(b'remote-changegroup: invalid value for param "%s"') % b'size'
        )
    except KeyError:
        raise error.Abort(
            _(b'remote-changegroup: missing "%s" param') % b'size'
        )

    digests = {}
    for typ in inpart.params.get(b'digests', b'').split():
        param = b'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(
                _(b'remote-changegroup: missing "%s" param') % param
            )
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange

    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(
            _(b'%s: not a bundle version 1.0') % util.hidepassword(raw_url)
        )
    ret = _processchangegroup(op, cg, tr, b'bundle2', b'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart(b'reply:changegroup')
        part.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        part.addparam(b'return', b'%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(
            _(b'bundle at %s is corrupted:\n%s')
            % (util.hidepassword(raw_url), bytes(e))
        )
    assert not inpart.read()


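The size/digest validation done by `util.digestchecker` above can be illustrated with a minimal stand-alone sketch. This is an assumption-laden illustration of the idea, not Mercurial's actual implementation; the class name and error messages here are hypothetical:

```python
import hashlib
import io


class DigestCheckedReader:
    """Sketch of a digest-checking stream wrapper (hypothetical name).

    Bytes are streamed through while running hashes are updated;
    validate() fails if the byte count or any declared digest differs
    from what the server advertised.
    """

    def __init__(self, fh, size, digests):
        self._fh = fh
        self._size = size
        self._read = 0
        # one running hash per declared digest type, e.g. {'sha1': hexdigest}
        self._hashers = {typ: hashlib.new(typ) for typ in digests}
        self._digests = digests

    def read(self, amt=-1):
        data = self._fh.read(amt)
        self._read += len(data)
        for h in self._hashers.values():
            h.update(data)
        return data

    def validate(self):
        if self._read != self._size:
            raise ValueError(
                'size mismatch: got %d, expected %d' % (self._read, self._size)
            )
        for typ, expected in self._digests.items():
            actual = self._hashers[typ].hexdigest()
            if actual != expected:
                raise ValueError('%s mismatch: got %s' % (typ, actual))


payload = b'changegroup-bytes'
reader = DigestCheckedReader(
    io.BytesIO(payload),
    size=len(payload),
    digests={'sha1': hashlib.sha1(payload).hexdigest()},
)
while reader.read(4):
    pass
reader.validate()  # raises on any mismatch; silent on success
```

Note that, as in the real handler, validation can only happen after the whole stream has been consumed, which is why `real_part.validate()` runs after the changegroup is applied.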
@parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params[b'return'])
    replyto = int(inpart.params[b'in-reply-to'])
    op.records.add(b'changegroup', {b'return': ret}, replyto)


@parthandler(b'check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races on bookmarks. It contains
    binary encoded (bookmark, node) tuples. If the local state does not
    match the one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(inpart)

    msgstandard = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" move from %s to %s)'
    )
    msgmissing = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" is missing, expected %s)'
    )
    msgexist = (
        b'remote repository changed while pushing - please try again '
        b'(bookmark "%s" set on %s, expected missing)'
    )
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, nodemod.short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, nodemod.short(node))
            else:
                finalmsg = msgstandard % (
                    book,
                    nodemod.short(node),
                    nodemod.short(currentnode),
                )
            raise error.PushRaced(finalmsg)


@parthandler(b'check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced(
            b'remote repository changed while pushing - please try again'
        )


@parthandler(b'check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. Other activity happening on unrelated heads
    is ignored.

    This allows servers with high traffic to avoid push contention as
    long as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().iterheads():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced(
                b'remote repository changed while pushing - '
                b'please try again'
            )


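Both check handlers above read their payload as a flat stream of fixed-width 20-byte binary nodes until the stream runs dry. A minimal sketch of that parsing loop (the function name is mine, not Mercurial's; the real handlers simply `assert` on a short trailing chunk):

```python
import io

NODE_LEN = 20  # binary sha1 node length used on the wire


def read_nodes(fh):
    """Read a stream of fixed-width binary nodes, as the check:heads
    and check:updated-heads handlers do.

    A short trailing chunk means the part was truncated; here we raise
    instead of asserting.
    """
    nodes = []
    chunk = fh.read(NODE_LEN)
    while len(chunk) == NODE_LEN:
        nodes.append(chunk)
        chunk = fh.read(NODE_LEN)
    if chunk:
        raise ValueError('truncated node stream')
    return nodes


heads = read_nodes(io.BytesIO(b'\x11' * 20 + b'\x22' * 20))
assert len(heads) == 2
```

The fixed width makes the format self-delimiting: no length prefix or terminator is needed, which keeps the part payload trivially streamable.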
@parthandler(b'check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = (
        b'remote repository changed while pushing - please try again '
        b'(%s is %s expected %s)'
    )
    for expectedphase, nodes in enumerate(phasetonodes):
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (
                    nodemod.short(n),
                    phases.phasenames[actualphase],
                    phases.phasenames[expectedphase],
                )
                raise error.PushRaced(finalmsg)


@parthandler(b'output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_(b'remote: %s\n') % line)


@parthandler(b'replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)


class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""


@parthandler(b'error:abort', (b'message', b'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(
        inpart.params[b'message'], hint=inpart.params.get(b'hint')
    )


@parthandler(
    b'error:pushkey',
    (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
)
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in (b'namespace', b'key', b'new', b'old', b'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(
        inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
    )


@parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get(b'parttype')
    if parttype is not None:
        kwargs[b'parttype'] = parttype
    params = inpart.params.get(b'params')
    if params is not None:
        kwargs[b'params'] = params.split(b'\0')

    raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))


@parthandler(b'error:pushraced', (b'message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])


@parthandler(b'listkeys', (b'namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params[b'namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add(b'listkeys', (namespace, r))


@parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params[b'namespace'])
    key = dec(inpart.params[b'key'])
    old = dec(inpart.params[b'old'])
    new = dec(inpart.params[b'new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
    op.records.add(b'pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:pushkey')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'return', b'%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in (b'namespace', b'key', b'new', b'old', b'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(
            partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
        )


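The `old`/`new` pair carried by the pushkey part gives it compare-and-swap semantics: the server applies the update only if its current value still equals `old`, and returns 0 otherwise (the `ret` the handler checks above). A minimal sketch of that contract against a plain dict (the function name is hypothetical, not a Mercurial API):

```python
def pushkey_apply(store, key, old, new):
    """Compare-and-swap in the spirit of a pushkey namespace (sketch).

    The update is applied only if the recorded old value still matches;
    an empty new value deletes the key. Returns 1 on success, 0 on
    failure, mirroring the integer 'ret' used on the wire.
    """
    current = store.get(key, '')
    if current != old:
        return 0  # lost the race: someone changed the value meanwhile
    if new:
        store[key] = new
    else:
        store.pop(key, None)
    return 1


books = {'stable': 'aaa'}
assert pushkey_apply(books, 'stable', 'aaa', 'bbb') == 1
assert books['stable'] == 'bbb'
assert pushkey_apply(books, 'stable', 'aaa', 'ccc') == 0  # stale old value
```

This is why a failed mandatory pushkey raises `PushkeyFailed` rather than aborting outright: the zero return is an expected race outcome, not a protocol error.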
@parthandler(b'bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default) the bookmark information is applied
    as is to the unbundling repository. Make sure a 'check:bookmarks' part
    is issued earlier to check for push races in such an update. This
    behavior is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(inpart)

    pushkeycompat = op.repo.ui.configbool(
        b'server', b'bookmarks-pushkey-compat'
    )
    bookmarksmode = op.modes.get(b'bookmarks', b'apply')

    if bookmarksmode == b'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs[b'pushkeycompat'] = b'1'
                hookargs[b'namespace'] = b'bookmarks'
                hookargs[b'key'] = book
                hookargs[b'old'] = nodemod.hex(bookstore.get(book, b''))
                hookargs[b'new'] = nodemod.hex(
                    node if node is not None else b''
                )
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook(
                    b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
                )

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:

            def runhook():
                for hookargs in allhooks:
                    op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))

            op.repo._afterlock(runhook)

    elif bookmarksmode == b'records':
        for book, node in changes:
            record = {b'bookmark': book, b'node': node}
            op.records.add(b'bookmarks', record)
    else:
        raise error.ProgrammingError(
            b'unknown bookmark mode: %s' % bookmarksmode
        )


@parthandler(b'phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)


@parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params[b'return'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'pushkey', {b'return': ret}, partid)


@parthandler(b'obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
        op.ui.writenoi18n(
            b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
        )
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug(
            b'ignoring obsolescence markers, feature not enabled\n'
        )
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    op.records.add(b'obsmarkers', {b'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart(b'reply:obsmarkers')
        rpart.addparam(
            b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
        )
        rpart.addparam(b'new', b'%i' % new, mandatory=False)


@parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params[b'new'])
    partid = int(inpart.params[b'in-reply-to'])
    op.records.add(b'obsmarkers', {b'new': ret}, partid)


2420 @parthandler(b'hgtagsfnodes')
2437 @parthandler(b'hgtagsfnodes')
2421 def handlehgtagsfnodes(op, inpart):
2438 def handlehgtagsfnodes(op, inpart):
2422 """Applies .hgtags fnodes cache entries to the local repo.
2439 """Applies .hgtags fnodes cache entries to the local repo.
2423
2440
2424 Payload is pairs of 20 byte changeset nodes and filenodes.
2441 Payload is pairs of 20 byte changeset nodes and filenodes.
2425 """
2442 """
2426 # Grab the transaction so we ensure that we have the lock at this point.
2443 # Grab the transaction so we ensure that we have the lock at this point.
2427 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2444 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2428 op.gettransaction()
2445 op.gettransaction()
2429 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2446 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2430
2447
2431 count = 0
2448 count = 0
2432 while True:
2449 while True:
2433 node = inpart.read(20)
2450 node = inpart.read(20)
2434 fnode = inpart.read(20)
2451 fnode = inpart.read(20)
2435 if len(node) < 20 or len(fnode) < 20:
2452 if len(node) < 20 or len(fnode) < 20:
2436 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2453 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2437 break
2454 break
2438 cache.setfnode(node, fnode)
2455 cache.setfnode(node, fnode)
2439 count += 1
2456 count += 1
2440
2457
2441 cache.write()
2458 cache.write()
2442 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2459 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
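The payload format the handler above consumes (fixed 20-byte changeset/filenode pairs, terminated by a short read) can be sketched in isolation; `read_fnode_pairs` and the sample stream are hypothetical, not part of Mercurial's API:

```python
import io


def read_fnode_pairs(fh):
    """Yield (node, fnode) pairs of 20 raw bytes each, stopping on a short read."""
    while True:
        node = fh.read(20)
        fnode = fh.read(20)
        if len(node) < 20 or len(fnode) < 20:
            # mirrors the handler: incomplete trailing data is ignored
            break
        yield node, fnode


# two complete pairs, then end of stream
payload = b'\x01' * 20 + b'\x02' * 20 + b'\x03' * 20 + b'\x04' * 20
pairs = list(read_fnode_pairs(io.BytesIO(payload)))
```

A truncated trailing pair is silently dropped, matching the `debug` + `break` path in the handler.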


rbcstruct = struct.Struct(b'>III')


@parthandler(b'cache:rev-branch-cache')
def handlerbc(op, inpart):
    """receive a rev-branch-cache payload and update the local cache

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    total = 0
    rawheader = inpart.read(rbcstruct.size)
    cache = op.repo.revbranchcache()
    cl = op.repo.unfiltered().changelog
    while rawheader:
        header = rbcstruct.unpack(rawheader)
        total += header[1] + header[2]
        utf8branch = inpart.read(header[0])
        branch = encoding.tolocal(utf8branch)
        for x in pycompat.xrange(header[1]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, False)
        for x in pycompat.xrange(header[2]):
            node = inpart.read(20)
            rev = cl.rev(node)
            cache.setdata(branch, rev, node, True)
        rawheader = inpart.read(rbcstruct.size)
    cache.write()
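The five-field record layout in the docstring above can be round-tripped with the same `>III` header struct; `encode_branch` and `decode_records` are illustrative helpers, not Mercurial functions:

```python
import io
import struct

# big-endian: branch name length, open head count, closed head count
rbcstruct = struct.Struct(b'>III')


def encode_branch(branch, open_nodes, closed_nodes):
    """Build one rev-branch-cache record: header, branch name, then node lists."""
    out = rbcstruct.pack(len(branch), len(open_nodes), len(closed_nodes))
    out += branch
    for node in open_nodes + closed_nodes:
        out += node
    return out


def decode_records(fh):
    """Parse records the way handlerbc does: header, name, then two node loops."""
    records = []
    rawheader = fh.read(rbcstruct.size)
    while rawheader:
        namelen, nopen, nclosed = rbcstruct.unpack(rawheader)
        branch = fh.read(namelen)
        opens = [fh.read(20) for _ in range(nopen)]
        closeds = [fh.read(20) for _ in range(nclosed)]
        records.append((branch, opens, closeds))
        rawheader = fh.read(rbcstruct.size)
    return records


payload = encode_branch(b'default', [b'\xaa' * 20], [b'\xbb' * 20])
records = decode_records(io.BytesIO(payload))
```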


@parthandler(b'pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool(b'push', b'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = b"USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)


@parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
def handlestreamv2bundle(op, part):
    requirements = urlreq.unquote(part.params[b'requirements']).split(b',')
    filecount = int(part.params[b'filecount'])
    bytecount = int(part.params[b'bytecount'])

    repo = op.repo
    if len(repo):
        msg = _(b'cannot apply stream clone to non empty repository')
        raise error.Abort(msg)

    repo.ui.debug(b'applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)


def widen_bundle(
    bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
):
    """generates bundle2 for widening a narrow clone

    bundler is the bundle to which data should be added
    repo is the localrepository instance
    oldmatcher matches what the client already has
    newmatcher matches what the client needs (including what it already has)
    common is a set of common heads between server and client
    known is a set of revs known on the client side (used in ellipses)
    cgversion is the changegroup version to send
    ellipses is a boolean value telling whether to send ellipses data or not

    returns bundle2 of the data required for extending
    """
    commonnodes = set()
    cl = repo.changelog
    for r in repo.revs(b"::%ln", common):
        commonnodes.add(cl.node(r))
    if commonnodes:
        # XXX: we should only send the filelogs (and treemanifest). user
        # already has the changelog and manifest
        packer = changegroup.getbundler(
            cgversion,
            repo,
            oldmatcher=oldmatcher,
            matcher=newmatcher,
            fullnodes=commonnodes,
        )
        cgdata = packer.generate(
            {nodemod.nullid},
            list(commonnodes),
            False,
            b'narrow_widen',
            changelog=False,
        )

        part = bundler.newpart(b'changegroup', data=cgdata)
        part.addparam(b'version', cgversion)
        if b'treemanifest' in repo.requirements:
            part.addparam(b'treemanifest', b'1')
        if b'exp-sidedata-flag' in repo.requirements:
            part.addparam(b'exp-sidedata', b'1')

    return bundler
@@ -1,3079 +1,3084
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import hashlib

from .i18n import _
from .node import (
    hex,
    nullid,
    nullrev,
)
from .thirdparty import attr
from . import (
    bookmarks as bookmod,
    bundle2,
    changegroup,
    discovery,
    error,
    exchangev2,
    lock as lockmod,
    logexchange,
    narrowspec,
    obsolete,
    phases,
    pushkey,
    pycompat,
    scmutil,
    sslutil,
    streamclone,
    url as urlmod,
    util,
    wireprototypes,
)
from .interfaces import repository
from .utils import stringutil

urlerr = util.urlerr
urlreq = util.urlreq

_NARROWACL_SECTION = b'narrowacl'

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {
    b'v1': b'01',
    b'v2': b'02',
    b'packed1': b's1',
    b'bundle2': b'02',  # legacy
}

# Maps bundle version with content opts to choose which part to bundle
_bundlespeccontentopts = {
    b'v1': {
        b'changegroup': True,
        b'cg.version': b'01',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': False,
        b'revbranchcache': False,
    },
    b'v2': {
        b'changegroup': True,
        b'cg.version': b'02',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': True,
        b'revbranchcache': True,
    },
    b'packed1': {b'cg.version': b's1'},
}
_bundlespeccontentopts[b'bundle2'] = _bundlespeccontentopts[b'v2']

_bundlespecvariants = {
    b"streamv2": {
        b"changegroup": False,
        b"streamv2": True,
        b"tagsfnodescache": False,
        b"revbranchcache": False,
    }
}

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {b'gzip', b'bzip2', b'none'}


@attr.s
class bundlespec(object):
    compression = attr.ib()
    wirecompression = attr.ib()
    version = attr.ib()
    wireversion = attr.ib()
    params = attr.ib()
    contentopts = attr.ib()


def parsebundlespec(repo, spec, strict=True):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    Returns a bundlespec object of (compression, version, parameters).
    Compression will be ``None`` if not in strict mode and a compression isn't
    defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """

    def parseparams(s):
        if b';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(b';', 1)

        for p in paramstr.split(b';'):
            if b'=' not in p:
                raise error.InvalidBundleSpecification(
                    _(
                        b'invalid bundle specification: '
                        b'missing "=" in parameter: %s'
                    )
                    % p
                )

            key, value = p.split(b'=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params

    if strict and b'-' not in spec:
        raise error.InvalidBundleSpecification(
            _(
                b'invalid bundle specification; '
                b'must be prefixed with compression: %s'
            )
            % spec
        )

    if b'-' in spec:
        compression, version = spec.split(b'-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _(b'%s compression is not supported') % compression
            )

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle version') % version
            )
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = b'v1'
            # Generaldelta repos require v2.
            if b'generaldelta' in repo.requirements:
                version = b'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = b'v2'
        elif spec in _bundlespeccgversions:
            if spec == b'packed1':
                compression = b'none'
            else:
                compression = b'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle specification') % spec
            )

    # Bundle version 1 only supports a known set of compression engines.
    if version == b'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _(b'compression engine %s is not supported on v1 bundles')
            % compression
        )

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == b'packed1' and b'requirements' in params:
        requirements = set(params[b'requirements'].split(b','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _(b'missing support for repository features: %s')
                % b', '.join(sorted(missingreqs))
            )

    # Compute contentopts based on the version
    contentopts = _bundlespeccontentopts.get(version, {}).copy()

    # Process the variants
    if b"stream" in params and params[b"stream"] == b"v2":
        variant = _bundlespecvariants[b"streamv2"]
        contentopts.update(variant)

    engine = util.compengines.forbundlename(compression)
    compression, wirecompression = engine.bundletype()
    wireversion = _bundlespeccgversions[version]

    return bundlespec(
        compression, wirecompression, version, wireversion, params, contentopts
    )
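The `<compression>-<type>[;key=value...]` grammar described in the docstring above can be sketched with a minimal standalone parser; `parse_spec` is a hypothetical helper working on str rather than Mercurial's bytes, and it skips the strict-mode and validation logic:

```python
from urllib.parse import unquote


def parse_spec(spec):
    """Minimal sketch of the <compression>-<type>[;key=value...] grammar."""
    compression, _, rest = spec.partition('-')
    version, _, paramstr = rest.partition(';')
    params = {}
    if paramstr:
        for p in paramstr.split(';'):
            # each parameter is a URI-encoded key=value pair
            key, _, value = p.partition('=')
            params[unquote(key)] = unquote(value)
    return compression, version, params


result = parse_spec('zstd-v2;stream=v2;requirements=generaldelta%2Crevlogv1')
```

Note how the `%2C` in the parameter value decodes back to the comma-separated requirements list, matching the URI decoding done by `parseparams`.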


def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = b"stream"
        if not header.startswith(b'HG') and header.startswith(b'\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = b"HG10"
            alg = b'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != b'HG':
        raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
    if version == b'10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith(b'2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == b'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(
            _(b'%s: unknown bundle version %s') % (fname, version)
        )
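The 4-byte header dispatch that `readbundle` performs can be summarized on its own; `classify_bundle` is an illustrative helper, and the human-readable labels are my own, not Mercurial strings:

```python
def classify_bundle(header):
    """Sketch of readbundle's dispatch on the first 4 bytes of a bundle.

    The header is a 2-byte magic ('HG') plus a 2-byte version: '10' is a
    changegroup v1 bundle, anything starting with '2' is bundle2, and 'S1'
    is a stream clone bundle.
    """
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        return 'not a Mercurial bundle'
    if version == b'10':
        return 'changegroup v1'
    if version.startswith(b'2'):
        return 'bundle2'
    if version == b'S1':
        return 'stream clone'
    return 'unknown version'
```

For the `HG10` case, `readbundle` goes on to read a further 2-byte compression code (e.g. `UN`, `GZ`, `BZ`) before handing the stream to `cg1unpacker`.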
275
275
276
276
277 def getbundlespec(ui, fh):
277 def getbundlespec(ui, fh):
278 """Infer the bundlespec from a bundle file handle.
278 """Infer the bundlespec from a bundle file handle.
279
279
280 The input file handle is seeked and the original seek position is not
280 The input file handle is seeked and the original seek position is not
281 restored.
281 restored.
282 """
282 """
283
283
284 def speccompression(alg):
284 def speccompression(alg):
285 try:
285 try:
286 return util.compengines.forbundletype(alg).bundletype()[0]
286 return util.compengines.forbundletype(alg).bundletype()[0]
287 except KeyError:
287 except KeyError:
288 return None
288 return None
289
289
290 b = readbundle(ui, fh, None)
290 b = readbundle(ui, fh, None)
291 if isinstance(b, changegroup.cg1unpacker):
291 if isinstance(b, changegroup.cg1unpacker):
292 alg = b._type
292 alg = b._type
293 if alg == b'_truncatedBZ':
293 if alg == b'_truncatedBZ':
294 alg = b'BZ'
294 alg = b'BZ'
295 comp = speccompression(alg)
295 comp = speccompression(alg)
296 if not comp:
296 if not comp:
297 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
297 raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
298 return b'%s-v1' % comp
298 return b'%s-v1' % comp
299 elif isinstance(b, bundle2.unbundle20):
299 elif isinstance(b, bundle2.unbundle20):
300 if b'Compression' in b.params:
300 if b'Compression' in b.params:
301 comp = speccompression(b.params[b'Compression'])
301 comp = speccompression(b.params[b'Compression'])
302 if not comp:
302 if not comp:
303 raise error.Abort(
303 raise error.Abort(
304 _(b'unknown compression algorithm: %s') % comp
304 _(b'unknown compression algorithm: %s') % comp
305 )
305 )
306 else:
306 else:
307 comp = b'none'
307 comp = b'none'
308
308
309 version = None
309 version = None
310 for part in b.iterparts():
310 for part in b.iterparts():
311 if part.type == b'changegroup':
311 if part.type == b'changegroup':
312 version = part.params[b'version']
312 version = part.params[b'version']
313 if version in (b'01', b'02'):
313 if version in (b'01', b'02'):
314 version = b'v2'
314 version = b'v2'
315 else:
315 else:
316 raise error.Abort(
316 raise error.Abort(
317 _(
317 _(
318 b'changegroup version %s does not have '
318 b'changegroup version %s does not have '
319 b'a known bundlespec'
319 b'a known bundlespec'
320 )
320 )
321 % version,
321 % version,
322 hint=_(b'try upgrading your Mercurial client'),
322 hint=_(b'try upgrading your Mercurial client'),
323 )
323 )
324 elif part.type == b'stream2' and version is None:
324 elif part.type == b'stream2' and version is None:
325 # A stream2 part requires to be part of a v2 bundle
325 # A stream2 part requires to be part of a v2 bundle
326 requirements = urlreq.unquote(part.params[b'requirements'])
326 requirements = urlreq.unquote(part.params[b'requirements'])
327 splitted = requirements.split()
327 splitted = requirements.split()
328 params = bundle2._formatrequirementsparams(splitted)
328 params = bundle2._formatrequirementsparams(splitted)
329 return b'none-v2;stream=v2;%s' % params
329 return b'none-v2;stream=v2;%s' % params
330
330
331 if not version:
331 if not version:
332 raise error.Abort(
332 raise error.Abort(
333 _(b'could not identify changegroup version in bundle')
333 _(b'could not identify changegroup version in bundle')
334 )
334 )
335
335
336 return b'%s-%s' % (comp, version)
336 return b'%s-%s' % (comp, version)
337 elif isinstance(b, streamclone.streamcloneapplier):
337 elif isinstance(b, streamclone.streamcloneapplier):
338 requirements = streamclone.readbundle1header(fh)[2]
338 requirements = streamclone.readbundle1header(fh)[2]
339 formatted = bundle2._formatrequirementsparams(requirements)
339 formatted = bundle2._formatrequirementsparams(requirements)
340 return b'none-packed1;%s' % formatted
340 return b'none-packed1;%s' % formatted
341 else:
341 else:
342 raise error.Abort(_(b'unknown bundle type: %s') % b)
342 raise error.Abort(_(b'unknown bundle type: %s') % b)
343
343
344
344
345 def _computeoutgoing(repo, heads, common):
345 def _computeoutgoing(repo, heads, common):
346 """Computes which revs are outgoing given a set of common
346 """Computes which revs are outgoing given a set of common
347 and a set of heads.
347 and a set of heads.
348
348
349 This is a separate function so extensions can have access to
349 This is a separate function so extensions can have access to
350 the logic.
350 the logic.
351
351
352 Returns a discovery.outgoing object.
352 Returns a discovery.outgoing object.
353 """
353 """
354 cl = repo.changelog
354 cl = repo.changelog
355 if common:
355 if common:
356 hasnode = cl.hasnode
356 hasnode = cl.hasnode
357 common = [n for n in common if hasnode(n)]
357 common = [n for n in common if hasnode(n)]
358 else:
358 else:
359 common = [nullid]
359 common = [nullid]
360 if not heads:
360 if not heads:
361 heads = cl.heads()
361 heads = cl.heads()
362 return discovery.outgoing(repo, common, heads)
362 return discovery.outgoing(repo, common, heads)
363
363
364
364
365 def _checkpublish(pushop):
365 def _checkpublish(pushop):
366 repo = pushop.repo
366 repo = pushop.repo
367 ui = repo.ui
367 ui = repo.ui
368 behavior = ui.config(b'experimental', b'auto-publish')
368 behavior = ui.config(b'experimental', b'auto-publish')
369 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
369 if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
370 return
370 return
371 remotephases = listkeys(pushop.remote, b'phases')
371 remotephases = listkeys(pushop.remote, b'phases')
372 if not remotephases.get(b'publishing', False):
372 if not remotephases.get(b'publishing', False):
373 return
373 return
374
374
375 if pushop.revs is None:
375 if pushop.revs is None:
376 published = repo.filtered(b'served').revs(b'not public()')
376 published = repo.filtered(b'served').revs(b'not public()')
377 else:
377 else:
378 published = repo.revs(b'::%ln - public()', pushop.revs)
378 published = repo.revs(b'::%ln - public()', pushop.revs)
379 if published:
379 if published:
380 if behavior == b'warn':
380 if behavior == b'warn':
381 ui.warn(
381 ui.warn(
            _(b'%i changesets about to be published\n') % len(published)
        )
    elif behavior == b'confirm':
        if ui.promptchoice(
            _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
            % len(published)
        ):
            raise error.Abort(_(b'user quit'))
    elif behavior == b'abort':
        msg = _(b'push would publish %i changesets') % len(published)
        hint = _(
            b"use --publish or adjust 'experimental.auto-publish'"
            b" config"
        )
        raise error.Abort(msg, hint=hint)


def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    # The goal of this config is to let developers choose the bundle
    # version used during exchange. This is especially handy during tests.
    # Value is a list of bundle versions to pick from; the highest version
    # should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist(b'devel', b'legacy.exchange')
    forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
    return forcebundle1 or not op.remote.capable(b'bundle2')
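
The selection rule in `_forcebundle1` can be sketched on its own, outside Mercurial (a minimal stand-in, assuming a plain list for the `devel.legacy.exchange` value and a set of remote capability names; `force_bundle1` is a hypothetical helper, not part of the Mercurial API):

```python
# Minimal sketch of the bundle-version choice made by _forcebundle1.
# 'exchange' stands in for ui.configlist(b'devel', b'legacy.exchange');
# 'remote_caps' stands in for the remote's advertised capabilities.
def force_bundle1(exchange, remote_caps):
    # bundle1 is forced only when the config lists bundle1 but not bundle2.
    forced = b'bundle2' not in exchange and b'bundle1' in exchange
    # Even when not forced, fall back to bundle1 if the remote does not
    # advertise bundle2 support.
    return forced or b'bundle2' not in remote_caps
```

With this sketch, `[b'bundle1']` forces bundle1 regardless of remote support, while listing both versions leaves the normal capability negotiation in charge.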


class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push-related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        force=False,
        revs=None,
        newbranch=False,
        bookmarks=(),
        publish=False,
        pushvars=None,
    ):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # summary of the remote phase situation
        self.remotephases = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exceptions from mandatory pushkey part failure
        self.pkfailcb = {}
        # an iterable of pushvars or None
        self.pushvars = pushvars
        # publish pushed changesets
        self.publish = publish

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set(
            b'%ln and parents(roots(%ln))',
            self.outgoing.commonheads,
            self.outgoing.missing,
        )
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads


# mapping of messages used when pushing bookmarks
bookmsgmap = {
    b'update': (
        _(b"updating bookmark %s\n"),
        _(b'updating bookmark %s failed!\n'),
    ),
    b'export': (
        _(b"exporting bookmark %s\n"),
        _(b'exporting bookmark %s failed!\n'),
    ),
    b'delete': (
        _(b"deleting remote bookmark %s\n"),
        _(b'deleting remote bookmark %s failed!\n'),
    ),
}


def push(
    repo,
    remote,
    force=False,
    revs=None,
    newbranch=False,
    bookmarks=(),
    publish=False,
    opargs=None,
):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(
        repo,
        remote,
        force,
        revs,
        newbranch,
        bookmarks,
        publish,
        **pycompat.strkwargs(opargs)
    )
    if pushop.remote.local():
        missing = (
            set(pushop.repo.requirements) - pushop.remote.local().supported
        )
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_(b"destination does not support push"))

    if not pushop.remote.capable(b'unbundle'):
        raise error.Abort(
            _(
                b'cannot push: destination does not support the '
                b'unbundle wire protocol command'
            )
        )

    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks,
        # requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool(b'experimental', b'bundle2.pushback')
        if (
            (not _forcebundle1(pushop))
            and maypushback
            and not bookmod.bookmarksinstore(repo)
        ):
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(
            pushop.repo, b'push-response', pushop.remote.url()
        )
    except error.LockUnavailable as err:
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = b'cannot lock source repository: %s\n' % stringutil.forcebytestr(
            err
        )
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager():
        with lock or util.nullcontextmanager():
            with pushop.trmanager or util.nullcontextmanager():
                pushop.repo.checkpush(pushop)
                _checkpublish(pushop)
                _pushdiscovery(pushop)
                if not _forcebundle1(pushop):
                    _pushbundle2(pushop)
                _pushchangeset(pushop)
                _pushsyncphase(pushop)
                _pushobsolete(pushop)
                _pushbookmark(pushop)

    if repo.ui.configbool(b'experimental', b'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pushop


# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}


def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""

    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func

    return dec


def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)
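
`pushdiscovery` above (and `b2partsgenerator` later in this file) rely on the same registry-decorator pattern: a mapping for wrapping and an ordered list for execution. A stripped-down sketch with generic names (not the Mercurial API):

```python
# Generic sketch of the step-registry pattern: the decorator records each
# step in a name -> function mapping (so extensions can wrap entries) and
# in an ordered list (so a runner can execute them in registration order).
step_order = []
step_mapping = {}


def register_step(stepname):
    def dec(func):
        assert stepname not in step_mapping  # each step registered once
        step_mapping[stepname] = func
        step_order.append(stepname)
        return func

    return dec


@register_step('changeset')
def discover_changesets(state):
    state.append('changeset')


@register_step('phase')
def discover_phases(state):
    state.append('phase')


def run_all_steps(state):
    # equivalent of _pushdiscovery: run every registered step in order
    for name in step_order:
        step_mapping[name](state)
```

Running `run_all_steps([])` on a fresh list appends `'changeset'` then `'phase'`; wrapping a step is just reassigning `step_mapping['phase']`.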


@pushdiscovery(b'changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    if pushop.revs:
        commoninc = fci(
            pushop.repo,
            pushop.remote,
            force=pushop.force,
            ancestorsof=pushop.revs,
        )
    else:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(
        pushop.repo,
        pushop.remote,
        onlyheads=pushop.revs,
        commoninc=commoninc,
        force=pushop.force,
    )
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc


@pushdiscovery(b'phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = listkeys(pushop.remote, b'phases')

    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and not pushop.outgoing.missing  # no changesets to be pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3781 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        pushop.outdatedphases = []
        pushop.fallbackoutdatedphases = []
        return

    pushop.remotephases = phases.remotephasessummary(
        pushop.repo, pushop.fallbackheads, remotephases
    )
    droots = pushop.remotephases.draftroots

    extracond = b''
    if not pushop.remotephases.publishing:
        extracond = b' and public()'
    revset = b'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not pushop.remotephases.publishing and pushop.publish:
        future = list(
            unfi.set(
                b'%ln and (not public() or %ln::)', pushop.futureheads, droots
            )
        )
    elif not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(
            unfi.set(b'roots(%ln + %ln::)', outgoing.missing, droots)
        )
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback


@pushdiscovery(b'obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
        return

    if not pushop.repo.obsstore:
        return

    if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
        return

    repo = pushop.repo
    # very naive computation, which can be quite expensive on big repos.
    # However: evolution is currently slow on them anyway.
    nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
    pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)


@pushdiscovery(b'bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug(b"checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)

    remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))

    explicit = {
        repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
    }

    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
    return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)


def _processcompared(pushop, pushed, explicit, remotebms, comp):
    """take decisions on bookmarks to push to the remote repo

    Exists to help extensions alter this behavior.
    """
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    repo = pushop.repo

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not pushed or repo[scid].rev() in pushed:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, b'', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, b''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        pushop.ui.warn(
            _(
                b'bookmark %s does not exist on the local '
                b'or remote repository!\n'
            )
            % explicit[0]
        )
        pushop.bkresult = 2

    pushop.outbookmarks.sort()
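
The four action-producing buckets above all reduce to a `(name, old, new)` triple where `''` means "absent on that side". A toy rendering of just that decision table (plain tuples with hypothetical node names; the real comparison comes from `bookmod.comparebookmarks`):

```python
# Toy version of the bookmark decisions taken in _processcompared.
# Each bucket maps to a (name, old-remote-node, new-node) triple;
# '' marks "absent", matching the outbookmarks convention above.
def bookmark_actions(advsrc, addsrc, overwritten, adddst):
    out = []
    for b, scid, dcid in advsrc:  # remote can simply fast-forward
        out.append((b, dcid, scid))
    for b, scid, dcid in addsrc:  # new locally: create on the remote
        out.append((b, '', scid))
    for b, scid, dcid in overwritten:  # advdst/diverge/differ: overwrite
        out.append((b, dcid, scid))
    for b, scid, dcid in adddst:  # only on the remote: delete there
        out.append((b, dcid, ''))
    return sorted(out)
```

Each input entry follows the `(bookmark, source-node, destination-node)` shape destructured in the loops above.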
873
873
874
874
875 def _pushcheckoutgoing(pushop):
875 def _pushcheckoutgoing(pushop):
876 outgoing = pushop.outgoing
876 outgoing = pushop.outgoing
877 unfi = pushop.repo.unfiltered()
877 unfi = pushop.repo.unfiltered()
878 if not outgoing.missing:
878 if not outgoing.missing:
879 # nothing to push
879 # nothing to push
880 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
880 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
881 return False
881 return False
882 # something to push
882 # something to push
883 if not pushop.force:
883 if not pushop.force:
884 # if repo.obsstore == False --> no obsolete
884 # if repo.obsstore == False --> no obsolete
885 # then, save the iteration
885 # then, save the iteration
886 if unfi.obsstore:
886 if unfi.obsstore:
887 # this message are here for 80 char limit reason
887 # this message are here for 80 char limit reason
888 mso = _(b"push includes obsolete changeset: %s!")
888 mso = _(b"push includes obsolete changeset: %s!")
889 mspd = _(b"push includes phase-divergent changeset: %s!")
889 mspd = _(b"push includes phase-divergent changeset: %s!")
890 mscd = _(b"push includes content-divergent changeset: %s!")
890 mscd = _(b"push includes content-divergent changeset: %s!")
891 mst = {
891 mst = {
892 b"orphan": _(b"push includes orphan changeset: %s!"),
892 b"orphan": _(b"push includes orphan changeset: %s!"),
893 b"phase-divergent": mspd,
893 b"phase-divergent": mspd,
894 b"content-divergent": mscd,
894 b"content-divergent": mscd,
895 }
895 }
896 # If we are to push if there is at least one
896 # If we are to push if there is at least one
897 # obsolete or unstable changeset in missing, at
897 # obsolete or unstable changeset in missing, at
898 # least one of the missinghead will be obsolete or
898 # least one of the missinghead will be obsolete or
899 # unstable. So checking heads only is ok
899 # unstable. So checking heads only is ok
900 for node in outgoing.missingheads:
900 for node in outgoing.missingheads:
901 ctx = unfi[node]
901 ctx = unfi[node]
902 if ctx.obsolete():
902 if ctx.obsolete():
903 raise error.Abort(mso % ctx)
903 raise error.Abort(mso % ctx)
904 elif ctx.isunstable():
904 elif ctx.isunstable():
905 # TODO print more than one instability in the abort
905 # TODO print more than one instability in the abort
906 # message
906 # message
907 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
907 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
908
908
909 discovery.checkheads(pushop)
909 discovery.checkheads(pushop)
910 return True
910 return True


# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}


def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, patch the b2partsgenmapping dictionary directly."""

    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func

    return dec

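As an illustration of the registration mechanism above, here is a minimal self-contained sketch of the same pattern: a decorator that records each part generator in a name -> function mapping plus an ordered list of step names. All names here (`partsgenerator`, `gen_changeset`, `gen_pushvars`) are hypothetical; this is not Mercurial code.

```python
# Minimal re-implementation of the b2partsgenerator registration
# pattern: each decorated function is stored in a mapping and its
# step name is appended (or inserted at idx) into an ordered list.
partsgenorder = []
partsgenmapping = {}


def partsgenerator(stepname, idx=None):
    def dec(func):
        assert stepname not in partsgenmapping
        partsgenmapping[stepname] = func
        if idx is None:
            partsgenorder.append(stepname)
        else:
            partsgenorder.insert(idx, stepname)
        return func

    return dec


@partsgenerator('changeset')
def gen_changeset(state):
    state.append('changeset')


@partsgenerator('pushvars', idx=0)
def gen_pushvars(state):
    state.append('pushvars')


# Steps run in list order; 'pushvars' asked to go first via idx=0,
# mirroring the real @b2partsgenerator(b'pushvars', idx=0) below.
state = []
for name in partsgenorder:
    partsgenmapping[name](state)
```

This is the same scheme `_pushbundle2` relies on when it iterates over `b2partsgenorder` and calls each generator in turn.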
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = b'related' in bundler.capabilities.get(
            b'checkheads', ()
        )
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pycompat.iteritems(pushop.pushbranchmap):
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart(b'check:updated-heads', data=data)

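The set arithmetic in the function above can be hard to read inline. The following standalone sketch (hypothetical node names, plain strings instead of binary nodes) shows how the affected-heads set is derived: heads the push discards, plus remote heads that will no longer be heads afterwards.

```python
# Illustrative re-implementation of the affected-heads computation
# from _pushb2ctxcheckheads; not Mercurial code.
def affected_heads(pushbranchmap):
    affected = set()
    for branch, heads in pushbranchmap.items():
        remoteheads, newheads, unsyncedheads, discardedheads = heads
        if remoteheads is not None:
            remote = set(remoteheads)
            # heads the push explicitly discards
            affected |= set(discardedheads) & remote
            # remote heads that stop being heads after the push
            affected |= remote - set(newheads)
    return affected


# 'n1' stays a head, 'n2' is discarded, 'n3' is superseded by 'n4'.
branchmap = {
    'default': (['n1', 'n2', 'n3'], ['n1', 'n4'], [], ['n2']),
}
```

Only the affected heads are then shipped in the `check:updated-heads` part, so unrelated branches cannot cause a spurious push-race abort.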
def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(
        pushop.outgoing.missing
        or pushop.outdatedphases
        or pushop.outobsmarkers
        or pushop.outbookmarks
    )

@b2partsgenerator(b'check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = b'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        data.append((book, old))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart(b'check:bookmarks', data=checkdata)

@b2partsgenerator(b'check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = [[] for p in phases.allphases]
        checks[phases.public].extend(pushop.remotephases.publicheads)
        checks[phases.draft].extend(pushop.remotephases.draftroots)
        if any(checks):
            for nodes in checks:
                nodes.sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart(b'check:phases', data=checkdata)

@b2partsgenerator(b'changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(pushop.repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)
    cgstream = changegroup.makestream(
        pushop.repo, pushop.outgoing, version, b'push'
    )
    cgpart = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam(b'version', version)
    if b'treemanifest' in pushop.repo.requirements:
        cgpart.addparam(b'treemanifest', b'1')
    if b'exp-sidedata-flag' in pushop.repo.requirements:
        cgpart.addparam(b'exp-sidedata', b'1')

    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies[b'changegroup']) == 1
        pushop.cgresult = cgreplies[b'changegroup'][0][b'return']

    return handlereply

@b2partsgenerator(b'phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if b'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
    haspushkey = b'pushkey' in b2caps
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)

def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add(b'phases')
    if pushop.outdatedphases:
        updates = [[] for p in phases.allphases]
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart(b'phase-heads', data=phasedata)

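The binary payload built above comes from `phases.binaryencode`. As a hedged sketch of what such a "phase-heads"-style encoding looks like on the wire (each entry a big-endian 32-bit phase number followed by a 20-byte node), here is a standalone re-implementation; the authoritative format lives in `mercurial.phases`, and this version only illustrates the idea.

```python
import struct

# One record per head: 4-byte big-endian phase, 20-byte node hash.
_entry = struct.Struct('>i20s')


def binaryencode(phasemapping):
    # phasemapping: list indexed by phase number, each item a list
    # of 20-byte node ids in that phase (same shape as the `updates`
    # list built by _pushb2phaseheads above).
    binarydata = []
    for phase, nodes in enumerate(phasemapping):
        for head in nodes:
            binarydata.append(_entry.pack(phase, head))
    return b''.join(binarydata)


data = binaryencode([[b'\x11' * 20], [b'\x22' * 20]])
```

Each record is fixed-size, so the receiving side can decode the part by iterating `_entry.unpack` over 24-byte slices.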
def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add(b'phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_(b'updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'phases'))
        part.addparam(b'key', enc(newremotehead.hex()))
        part.addparam(b'old', enc(b'%d' % phases.draft))
        part.addparam(b'new', enc(b'%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _(b'server ignored update of %s to public!\n') % node
            elif not int(results[0][b'return']):
                msg = _(b'updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)

    return handlereply

@b2partsgenerator(b'obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if b'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator(b'bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if b'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
    legacybooks = b'bookmarks' in legacy

    if not legacybooks and b'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif b'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)

def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return b'export'
    elif not new:
        return b'delete'
    return b'update'

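The mapping from old/new bookmark state to an action is small enough to state as a truth table. A plain-string re-implementation (illustrative only; the real `_bmaction` above works on node values):

```python
def bmaction(old, new):
    if not old:
        return 'export'  # bookmark did not exist remotely: new export
    elif not new:
        return 'delete'  # bookmark loses its target: deletion
    return 'update'      # both present: the bookmark moves
```

These three action names index into `bookmsgmap` when the reply handlers below print success or failure messages.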
def _abortonsecretctx(pushop, node, b):
    """abort if a given bookmark points to a secret changeset"""
    if node and pushop.repo[node].phase() == phases.secret:
        raise error.Abort(
            _(b'cannot push bookmark %s as it points to a secret changeset') % b
        )

def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart(b'bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply

def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'bookmarks'))
        part.addparam(b'key', enc(book))
        part.addparam(b'old', enc(hex(old)))
        part.addparam(b'new', enc(hex(new)))
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0][b'return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1

    return handlereply

@b2partsgenerator(b'pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if b'=' not in raw:
                msg = (
                    b"unable to parse variable '%s', should follow "
                    b"'KEY=VALUE' or 'KEY=' format"
                )
                raise error.Abort(msg % raw)
            k, v = raw.split(b'=', 1)
            shellvars[k] = v

        part = bundler.newpart(b'pushvars')

        for key, value in pycompat.iteritems(shellvars):
            part.addparam(key, value, mandatory=False)

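The `KEY=VALUE` parsing above splits only on the first `=`, so values may themselves contain `=` characters and an empty value (`KEY=`) is legal. A standalone sketch of that parsing logic, using plain strings instead of bytes (illustrative only, not Mercurial code):

```python
def parse_pushvars(pushvars):
    shellvars = {}
    for raw in pushvars:
        if '=' not in raw:
            raise ValueError(
                "unable to parse variable %r, should follow "
                "'KEY=VALUE' or 'KEY=' format" % raw
            )
        # split on the first '=' only, so the value may contain '='
        k, v = raw.split('=', 1)
        shellvars[k] = v
    return shellvars
```

Each resulting pair becomes a non-mandatory parameter of the `pushvars` part, so a server that does not understand it can ignore the part safely.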
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = pushop.trmanager and pushop.ui.configbool(
        b'experimental', b'bundle2.pushback'
    )

    # create reply capability
    capsblob = bundle2.encodecaps(
        bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
    )
    bundler.newpart(b'replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            with pushop.remote.commandexecutor() as e:
                reply = e.callcommand(
                    b'unbundle',
                    {
                        b'bundle': stream,
                        b'heads': [b'force'],
                        b'url': pushop.remote.url(),
                    },
                ).result()
        except error.BundleValueError as exc:
            raise error.Abort(_(b'missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_(b'missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_(b'remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
            raise error.Abort(_(b'push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable(b'unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (
        outgoing.excluded or pushop.repo.changelog.filteredrevs
    ):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(
            pushop.repo,
            outgoing,
            b'01',
            b'push',
            fastpath=True,
            bundlecaps=bundlecaps,
        )
    else:
        cg = changegroup.makechangegroup(
            pushop.repo, outgoing, b'01', b'push', bundlecaps=bundlecaps
        )

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = [b'force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = listkeys(pushop.remote, b'phases')
    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and pushop.cgresult is None  # nothing was pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {b'publishing': b'True'}
    if not remotephases:  # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads, remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get(b'publishing', False):
            _localphasemove(pushop, cheads)
        else:  # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if b'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add(b'phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            with pushop.remote.commandexecutor() as e:
                r = e.callcommand(
                    b'pushkey',
                    {
                        b'namespace': b'phases',
                        b'key': newremotehead.hex(),
                        b'old': b'%d' % phases.draft,
                        b'new': b'%d' % phases.public,
                    },
                ).result()

            if not r:
                pushop.ui.warn(
                    _(b'updating %s to public failed!\n') % newremotehead
                )

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(
            pushop.repo, pushop.trmanager.transaction(), phase, nodes
        )
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(
                _(
                    b'cannot lock source repo, skipping '
                    b'local %s phase update\n'
                )
                % phasestr
            )


def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if b'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug(b'try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey(b'obsolete', key, b'', data))
        if [r for r in rslts if not r]:
            msg = _(b'failed to push some obsolete markers!\n')
            repo.ui.warn(msg)


def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'

        with remote.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': b'bookmarks',
                    b'key': b,
                    b'old': hex(old),
                    b'new': hex(new),
                },
            ).result()

        if r:
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1


class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        heads=None,
        force=False,
        bookmarks=(),
        remotebookmarks=None,
        streamclonerequested=None,
        includepats=None,
        excludepats=None,
        depth=None,
    ):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [
            repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
        ]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False
        # Set of file patterns to include.
        self.includepats = includepats
        # Set of file patterns to exclude.
        self.excludepats = excludepats
        # Number of ancestor changesets to pull from each pulled head.
        self.depth = depth

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()


class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""

    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = b'%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs[b'source'] = self.source
            self._tr.hookargs[b'url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()


def listkeys(remote, namespace):
    with remote.commandexecutor() as e:
        return e.callcommand(b'listkeys', {b'namespace': namespace}).result()


def _fullpullbundle2(repo, pullop):
    # The server may send a partial reply, i.e. when inlining
    # pre-computed bundles. In that case, update the common
    # set based on the results and pull another bundle.
    #
    # There are two indicators that the process is finished:
    # - no changeset has been added, or
    # - all remote heads are known locally.
    # The head check must use the unfiltered view as obsoletion
    # markers can hide heads.
    unfi = repo.unfiltered()
    unficl = unfi.changelog

    def headsofdiff(h1, h2):
        """Returns heads(h1 % h2)"""
        res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
        return set(ctx.node() for ctx in res)

    def headsofunion(h1, h2):
        """Returns heads((h1 + h2) - null)"""
        res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
        return set(ctx.node() for ctx in res)

    while True:
        old_heads = unficl.heads()
        clstart = len(unficl)
        _pullbundle2(pullop)
        if repository.NARROW_REQUIREMENT in repo.requirements:
            # XXX narrow clones filter the heads on the server side during
            # XXX getbundle and result in partial replies as well.
            # XXX Disable pull bundles in this case as band aid to avoid
            # XXX extra round trips.
            break
        if clstart == len(unficl):
            break
        if all(unficl.hasnode(n) for n in pullop.rheads):
            break
        new_heads = headsofdiff(unficl.heads(), old_heads)
        pullop.common = headsofunion(new_heads, pullop.common)
        pullop.rheads = set(pullop.rheads) - pullop.common


def pull(
    repo,
    remote,
    heads=None,
    force=False,
    bookmarks=(),
    opargs=None,
    streamclonerequested=None,
    includepats=None,
    excludepats=None,
    depth=None,
):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.
    ``includepats`` and ``excludepats`` define explicit file patterns to
    include and exclude in storage, respectively. If not defined, narrow
    patterns from the repo instance are used, if available.
    ``depth`` is an integer indicating the DAG depth of history we're
    interested in. If defined, for each revision specified in ``heads``, we
    will fetch up to this many of its ancestors and data associated with them.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}

    # We allow the narrow patterns to be passed in explicitly to provide more
    # flexibility for API consumers.
    if includepats or excludepats:
        includepats = includepats or set()
        excludepats = excludepats or set()
    else:
        includepats, excludepats = repo.narrowpats

    narrowspec.validatepatterns(includepats)
    narrowspec.validatepatterns(excludepats)

    pullop = pulloperation(
        repo,
        remote,
        heads,
        force,
        bookmarks=bookmarks,
        streamclonerequested=streamclonerequested,
        includepats=includepats,
        excludepats=excludepats,
        depth=depth,
        **pycompat.strkwargs(opargs)
    )

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
    wlock = util.nullcontextmanager()
    if not bookmod.bookmarksinstore(repo):
        wlock = repo.wlock()
    with wlock, repo.lock(), pullop.trmanager:
        # Use the modern wire protocol, if available.
        if remote.capable(b'command-changesetdata'):
            exchangev2.pull(pullop)
        else:
            # This should ideally be in _pullbundle2(). However, it needs to run
            # before discovery to avoid extra work.
            _maybeapplyclonebundle(pullop)
            streamclone.maybeperformlegacystreamclone(pullop)
            _pulldiscovery(pullop)
            if pullop.canusebundle2:
                _fullpullbundle2(repo, pullop)
            _pullchangeset(pullop)
            _pullphase(pullop)
            _pullbookmarks(pullop)
            _pullobsolete(pullop)

    # storing remotenames
    if repo.ui.configbool(b'experimental', b'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pullop


# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}


def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order (this
    may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""

    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func

    return dec


def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)


@pulldiscovery(b'b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # to new implementations.
        return
    books = listkeys(pullop.remote, b'bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery(b'changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will be changed to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(
        pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
    )
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge number of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads


def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroup."""
    kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}

    # make ui easier to access
    ui = pullop.repo.ui

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]

    # declare pull perimeters
    kwargs[b'common'] = pullop.common
    kwargs[b'heads'] = pullop.heads or pullop.rheads

    # check whether the server supports narrow, then add includepats and
    # excludepats
    servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
    if servernarrow and pullop.includepats:
        kwargs[b'includepats'] = pullop.includepats
    if servernarrow and pullop.excludepats:
        kwargs[b'excludepats'] = pullop.excludepats

    if streaming:
        kwargs[b'cg'] = False
        kwargs[b'stream'] = True
        pullop.stepsdone.add(b'changegroup')
        pullop.stepsdone.add(b'phases')

    else:
        # pulling changegroup
        pullop.stepsdone.add(b'changegroup')

        kwargs[b'cg'] = pullop.fetch

        legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
        hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
        if not legacyphase and hasbinaryphase:
            kwargs[b'phases'] = True
            pullop.stepsdone.add(b'phases')

        if b'listkeys' in pullop.remotebundle2caps:
            if b'phases' not in pullop.stepsdone:
                kwargs[b'listkeys'] = [b'phases']

    bookmarksrequested = False
    legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
    hasbinarybook = b'bookmarks' in pullop.remotebundle2caps

    if pullop.remotebookmarks is not None:
        pullop.stepsdone.add(b'request-bookmarks')

    if (
        b'request-bookmarks' not in pullop.stepsdone
        and pullop.remotebookmarks is None
        and not legacybookmark
        and hasbinarybook
1932 and hasbinarybook
1931 ):
1933 ):
1932 kwargs[b'bookmarks'] = True
1934 kwargs[b'bookmarks'] = True
1933 bookmarksrequested = True
1935 bookmarksrequested = True
1934
1936
1935 if b'listkeys' in pullop.remotebundle2caps:
1937 if b'listkeys' in pullop.remotebundle2caps:
1936 if b'request-bookmarks' not in pullop.stepsdone:
1938 if b'request-bookmarks' not in pullop.stepsdone:
1937 # make sure to always includes bookmark data when migrating
1939 # make sure to always includes bookmark data when migrating
1938 # `hg incoming --bundle` to using this function.
1940 # `hg incoming --bundle` to using this function.
1939 pullop.stepsdone.add(b'request-bookmarks')
1941 pullop.stepsdone.add(b'request-bookmarks')
1940 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1942 kwargs.setdefault(b'listkeys', []).append(b'bookmarks')
1941
1943
1942 # If this is a full pull / clone and the server supports the clone bundles
1944 # If this is a full pull / clone and the server supports the clone bundles
1943 # feature, tell the server whether we attempted a clone bundle. The
1945 # feature, tell the server whether we attempted a clone bundle. The
1944 # presence of this flag indicates the client supports clone bundles. This
1946 # presence of this flag indicates the client supports clone bundles. This
1945 # will enable the server to treat clients that support clone bundles
1947 # will enable the server to treat clients that support clone bundles
1946 # differently from those that don't.
1948 # differently from those that don't.
1947 if (
1949 if (
1948 pullop.remote.capable(b'clonebundles')
1950 pullop.remote.capable(b'clonebundles')
1949 and pullop.heads is None
1951 and pullop.heads is None
1950 and list(pullop.common) == [nullid]
1952 and list(pullop.common) == [nullid]
1951 ):
1953 ):
1952 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1954 kwargs[b'cbattempted'] = pullop.clonebundleattempted
1953
1955
1954 if streaming:
1956 if streaming:
1955 pullop.repo.ui.status(_(b'streaming all changes\n'))
1957 pullop.repo.ui.status(_(b'streaming all changes\n'))
1956 elif not pullop.fetch:
1958 elif not pullop.fetch:
1957 pullop.repo.ui.status(_(b"no changes found\n"))
1959 pullop.repo.ui.status(_(b"no changes found\n"))
1958 pullop.cgresult = 0
1960 pullop.cgresult = 0
1959 else:
1961 else:
1960 if pullop.heads is None and list(pullop.common) == [nullid]:
1962 if pullop.heads is None and list(pullop.common) == [nullid]:
1961 pullop.repo.ui.status(_(b"requesting all changes\n"))
1963 pullop.repo.ui.status(_(b"requesting all changes\n"))
1962 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1964 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1963 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1965 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1964 if obsolete.commonversion(remoteversions) is not None:
1966 if obsolete.commonversion(remoteversions) is not None:
1965 kwargs[b'obsmarkers'] = True
1967 kwargs[b'obsmarkers'] = True
1966 pullop.stepsdone.add(b'obsmarkers')
1968 pullop.stepsdone.add(b'obsmarkers')
1967 _pullbundle2extraprepare(pullop, kwargs)
1969 _pullbundle2extraprepare(pullop, kwargs)
1968
1970
1969 with pullop.remote.commandexecutor() as e:
1971 with pullop.remote.commandexecutor() as e:
1970 args = dict(kwargs)
1972 args = dict(kwargs)
1971 args[b'source'] = b'pull'
1973 args[b'source'] = b'pull'
1972 bundle = e.callcommand(b'getbundle', args).result()
1974 bundle = e.callcommand(b'getbundle', args).result()
1973
1975
1974 try:
1976 try:
1975 op = bundle2.bundleoperation(
1977 op = bundle2.bundleoperation(
1976 pullop.repo, pullop.gettransaction, source=b'pull'
1978 pullop.repo, pullop.gettransaction, source=b'pull'
1977 )
1979 )
1978 op.modes[b'bookmarks'] = b'records'
1980 op.modes[b'bookmarks'] = b'records'
1979 bundle2.processbundle(pullop.repo, bundle, op=op)
1981 bundle2.processbundle(pullop.repo, bundle, op=op)
1980 except bundle2.AbortFromPart as exc:
1982 except bundle2.AbortFromPart as exc:
1981 pullop.repo.ui.status(_(b'remote: abort: %s\n') % exc)
1983 pullop.repo.ui.status(_(b'remote: abort: %s\n') % exc)
1982 raise error.Abort(_(b'pull failed on remote'), hint=exc.hint)
1984 raise error.Abort(_(b'pull failed on remote'), hint=exc.hint)
1983 except error.BundleValueError as exc:
1985 except error.BundleValueError as exc:
1984 raise error.Abort(_(b'missing support for %s') % exc)
1986 raise error.Abort(_(b'missing support for %s') % exc)
1985
1987
1986 if pullop.fetch:
1988 if pullop.fetch:
1987 pullop.cgresult = bundle2.combinechangegroupresults(op)
1989 pullop.cgresult = bundle2.combinechangegroupresults(op)
1988
1990
1989 # processing phases change
1991 # processing phases change
1990 for namespace, value in op.records[b'listkeys']:
1992 for namespace, value in op.records[b'listkeys']:
1991 if namespace == b'phases':
1993 if namespace == b'phases':
1992 _pullapplyphases(pullop, value)
1994 _pullapplyphases(pullop, value)
1993
1995
1994 # processing bookmark update
1996 # processing bookmark update
1995 if bookmarksrequested:
1997 if bookmarksrequested:
1996 books = {}
1998 books = {}
1997 for record in op.records[b'bookmarks']:
1999 for record in op.records[b'bookmarks']:
1998 books[record[b'bookmark']] = record[b"node"]
2000 books[record[b'bookmark']] = record[b"node"]
1999 pullop.remotebookmarks = books
2001 pullop.remotebookmarks = books
2000 else:
2002 else:
2001 for namespace, value in op.records[b'listkeys']:
2003 for namespace, value in op.records[b'listkeys']:
2002 if namespace == b'bookmarks':
2004 if namespace == b'bookmarks':
2003 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2005 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
2004
2006
2005 # bookmark data were either already there or pulled in the bundle
2007 # bookmark data were either already there or pulled in the bundle
2006 if pullop.remotebookmarks is not None:
2008 if pullop.remotebookmarks is not None:
2007 _pullbookmarks(pullop)
2009 _pullbookmarks(pullop)
2008
2010
2009
2011
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""


def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break a future useful rollback
    # call.
    if b'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_(b"no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_(b"requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable(b'getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle(
            b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
        )
    elif pullop.heads is None:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroup', {b'nodes': pullop.fetch, b'source': b'pull',}
            ).result()

    elif not pullop.remote.capable(b'changegroupsubset'):
        raise error.Abort(
            _(
                b"partial pull cannot be done because "
                b"other repository doesn't support "
                b"changegroupsubset."
            )
        )
    else:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroupsubset',
                {
                    b'bases': pullop.fetch,
                    b'heads': pullop.heads,
                    b'source': b'pull',
                },
            ).result()

    bundleop = bundle2.applybundle(
        pullop.repo, cg, tr, b'pull', pullop.remote.url()
    )
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)


def _pullphase(pullop):
    # Get phases data from the remote
    if b'phases' in pullop.stepsdone:
        return
    remotephases = listkeys(pullop.remote, b'phases')
    _pullapplyphases(pullop, remotephases)


def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if b'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'phases')
    publishing = bool(remotephases.get(b'publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(
            pullop.repo, pullop.pulledsubset, remotephases
        )
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing: all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)


def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if b'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(
        repo.ui,
        repo,
        remotebookmarks,
        pullop.remote.url(),
        pullop.gettransaction,
        explicit=pullop.explicitbookmarks,
    )


def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if b'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
        remoteobs = listkeys(pullop.remote, b'obsolete')
        if b'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith(b'dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
                pullop.repo.invalidatevolatilesets()
    return tr


def applynarrowacl(repo, kwargs):
    """Apply narrow fetch access control.

    This massages the named arguments for getbundle wire protocol commands
    so requested data is filtered through access control rules.
    """
    ui = repo.ui
    # TODO this assumes existence of HTTP and is a layering violation.
    username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username())
    user_includes = ui.configlist(
        _NARROWACL_SECTION,
        username + b'.includes',
        ui.configlist(_NARROWACL_SECTION, b'default.includes'),
    )
    user_excludes = ui.configlist(
        _NARROWACL_SECTION,
        username + b'.excludes',
        ui.configlist(_NARROWACL_SECTION, b'default.excludes'),
    )
    if not user_includes:
        raise error.Abort(
            _(b"{} configuration for user {} is empty").format(
                _NARROWACL_SECTION, username
            )
        )

    user_includes = [
        b'path:.' if p == b'*' else b'path:' + p for p in user_includes
    ]
    user_excludes = [
        b'path:.' if p == b'*' else b'path:' + p for p in user_excludes
    ]

    req_includes = set(kwargs.get(r'includepats', []))
    req_excludes = set(kwargs.get(r'excludepats', []))

    req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns(
        req_includes, req_excludes, user_includes, user_excludes
    )

    if invalid_includes:
        raise error.Abort(
            _(b"The following includes are not accessible for {}: {}").format(
                username, invalid_includes
            )
        )

    new_args = {}
    new_args.update(kwargs)
    new_args[r'narrow'] = True
    new_args[r'narrow_acl'] = True
    new_args[r'includepats'] = req_includes
    if req_excludes:
        new_args[r'excludepats'] = req_excludes

    return new_args


def _computeellipsis(repo, common, heads, known, match, depth=None):
    """Compute the shape of a narrowed DAG.

    Args:
      repo: The repository we're transferring.
      common: The roots of the DAG range we're transferring.
        May be just [nullid], which means all ancestors of heads.
      heads: The heads of the DAG range we're transferring.
      match: The narrowmatcher that allows us to identify relevant changes.
      depth: If not None, only consider nodes to be full nodes if they are at
        most depth changesets away from one of heads.

    Returns:
      A tuple of (visitnodes, relevant_nodes, ellipsisroots) where:

      visitnodes: The list of nodes (either full or ellipsis) which
        need to be sent to the client.
      relevant_nodes: The set of changelog nodes which change a file inside
        the narrowspec. The client needs these as non-ellipsis nodes.
      ellipsisroots: A dict of {rev: parents} that is used in
        narrowchangegroup to produce ellipsis nodes with the
        correct parents.
    """
    cl = repo.changelog
    mfl = repo.manifestlog

    clrev = cl.rev

    commonrevs = {clrev(n) for n in common} | {nullrev}
    headsrevs = {clrev(n) for n in heads}

    if depth:
        revdepth = {h: 0 for h in headsrevs}

    ellipsisheads = collections.defaultdict(set)
    ellipsisroots = collections.defaultdict(set)

    def addroot(head, curchange):
        """Add a root to an ellipsis head, splitting heads with 3 roots."""
        ellipsisroots[head].add(curchange)
        # Recursively split ellipsis heads with 3 roots by finding the
        # roots' youngest common descendant which is an elided merge commit.
        # That descendant takes 2 of the 3 roots as its own, and becomes a
        # root of the head.
        while len(ellipsisroots[head]) > 2:
            child, roots = splithead(head)
            splitroots(head, child, roots)
            head = child  # Recurse in case we just added a 3rd root

    def splitroots(head, child, roots):
        ellipsisroots[head].difference_update(roots)
        ellipsisroots[head].add(child)
        ellipsisroots[child].update(roots)
        ellipsisroots[child].discard(child)

    def splithead(head):
        r1, r2, r3 = sorted(ellipsisroots[head])
        for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)):
            mid = repo.revs(
                b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head
            )
            for j in mid:
                if j == nr2:
                    return nr2, (nr1, nr2)
                if j not in ellipsisroots or len(ellipsisroots[j]) < 2:
                    return j, (nr1, nr2)
        raise error.Abort(
            _(
                b'Failed to split up ellipsis node! head: %d, '
                b'roots: %d %d %d'
            )
            % (head, r1, r2, r3)
        )

    missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs))
    visit = reversed(missing)
    relevant_nodes = set()
    visitnodes = [cl.node(m) for m in missing]
    required = set(headsrevs) | known
    for rev in visit:
        clrev = cl.changelogrevision(rev)
        ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev]
        if depth is not None:
            curdepth = revdepth[rev]
            for p in ps:
                revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1))
        needed = False
        shallow_enough = depth is None or revdepth[rev] <= depth
        if shallow_enough:
            curmf = mfl[clrev.manifest].read()
            if ps:
                # We choose to not trust the changed files list in
                # changesets because it's not always correct. TODO: could
                # we trust it for the non-merge case?
                p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read()
                needed = bool(curmf.diff(p1mf, match))
                if not needed and len(ps) > 1:
                    # For merge changes, the list of changed files is not
                    # helpful, since we need to emit the merge if a file
                    # in the narrow spec has changed on either side of the
                    # merge. As a result, we do a manifest diff to check.
                    p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read()
                    needed = bool(curmf.diff(p2mf, match))
            else:
                # For a root node, we need to include the node if any
                # files in the node match the narrowspec.
                needed = any(curmf.walk(match))

        if needed:
            for head in ellipsisheads[rev]:
                addroot(head, rev)
            for p in ps:
                required.add(p)
            relevant_nodes.add(cl.node(rev))
        else:
            if not ps:
                ps = [nullrev]
            if rev in required:
                for head in ellipsisheads[rev]:
                    addroot(head, rev)
                for p in ps:
                    ellipsisheads[p].add(rev)
            else:
                for p in ps:
                    ellipsisheads[p] |= ellipsisheads[rev]

    # add common changesets as roots of their reachable ellipsis heads
    for c in commonrevs:
        for head in ellipsisheads[c]:
            addroot(head, c)
    return visitnodes, relevant_nodes, ellipsisroots


def caps20to10(repo, role):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {b'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
    caps.add(b'bundle2=' + urlreq.quote(capsblob))
    return caps


2358 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2360 # List of names of steps to perform for a bundle2 for getbundle, order matters.
2359 getbundle2partsorder = []
2361 getbundle2partsorder = []
2360
2362
2361 # Mapping between step name and function
2363 # Mapping between step name and function
2362 #
2364 #
2363 # This exists to help extensions wrap steps if necessary
2365 # This exists to help extensions wrap steps if necessary
2364 getbundle2partsmapping = {}
2366 getbundle2partsmapping = {}
2365
2367
2366
2368
2367 def getbundle2partsgenerator(stepname, idx=None):
2369 def getbundle2partsgenerator(stepname, idx=None):
2368 """decorator for function generating bundle2 part for getbundle
2370 """decorator for function generating bundle2 part for getbundle
2369
2371
2370 The function is added to the step -> function mapping and appended to the
2372 The function is added to the step -> function mapping and appended to the
2371 list of steps. Beware that decorated functions will be added in order
2373 list of steps. Beware that decorated functions will be added in order
2372 (this may matter).
2374 (this may matter).
2373
2375
2374 You can only use this decorator for new steps, if you want to wrap a step
2376 You can only use this decorator for new steps, if you want to wrap a step
2375 from an extension, attack the getbundle2partsmapping dictionary directly."""
2377 from an extension, attack the getbundle2partsmapping dictionary directly."""
2376
2378
2377 def dec(func):
2379 def dec(func):
2378 assert stepname not in getbundle2partsmapping
2380 assert stepname not in getbundle2partsmapping
2379 getbundle2partsmapping[stepname] = func
2381 getbundle2partsmapping[stepname] = func
2380 if idx is None:
2382 if idx is None:
2381 getbundle2partsorder.append(stepname)
2383 getbundle2partsorder.append(stepname)
2382 else:
2384 else:
2383 getbundle2partsorder.insert(idx, stepname)
2385 getbundle2partsorder.insert(idx, stepname)
2384 return func
2386 return func
2385
2387
2386 return dec
2388 return dec
2387
2389
2388
2390
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith(b'HG2') for cap in bundlecaps)
    return False


def getbundlechunks(
    repo, source, heads=None, common=None, bundlecaps=None, **kwargs
):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns a 2-tuple of a dict with metadata about the generated bundle
    and an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    info = {}
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get(b'cg', True):
            raise ValueError(
                _(b'request for bundle10 must include changegroup')
            )

        if kwargs:
            raise ValueError(
                _(b'unsupported getbundle arguments: %s')
                % b', '.join(sorted(kwargs.keys()))
            )
        outgoing = _computeoutgoing(repo, heads, common)
        info[b'bundleversion'] = 1
        return (
            info,
            changegroup.makestream(
                repo, outgoing, b'01', source, bundlecaps=bundlecaps
            ),
        )

    # bundle20 case
    info[b'bundleversion'] = 2
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith(b'bundle2='):
            blob = urlreq.unquote(bcaps[len(b'bundle2=') :])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs[b'heads'] = heads
    kwargs[b'common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(
            bundler,
            repo,
            source,
            bundlecaps=bundlecaps,
            b2caps=b2caps,
            **pycompat.strkwargs(kwargs)
        )

    info[b'prefercompressed'] = bundler.prefercompressed

    return info, bundler.getchunks()


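The `bundle2=` capability blob that `caps20to10` encodes and `getbundlechunks` decodes is just a URL-quoted string carried inside a space-separated capability list. A small round-trip sketch, using `urllib.parse` in place of Mercurial's `urlreq` wrapper and a made-up blob value:

```python
# Round-trip of the 'bundle2=<urlquoted blob>' capability, as quoted by
# caps20to10 and unquoted by getbundlechunks. The blob content here is a
# hypothetical example, not a real capability set.
from urllib.parse import quote, unquote

capsblob = 'HG20\nchangegroup=01,02'
cap = 'bundle2=' + quote(capsblob)

# The quoting keeps the capability token free of whitespace and '=' so it
# can travel in a space-separated capabilities list.
prefix = 'bundle2='
assert cap.startswith(prefix)
decoded = unquote(cap[len(prefix):])
assert decoded == capsblob
```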
@getbundle2partsgenerator(b'stream2')
def _getbundlestream2(bundler, repo, *args, **kwargs):
    return bundle2.addpartbundlestream2(bundler, repo, **kwargs)


@getbundle2partsgenerator(b'changegroup')
def _getbundlechangegrouppart(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    **kwargs
):
    """add a changegroup part to the requested bundle"""
    if not kwargs.get(r'cg', True):
        return

    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    outgoing = _computeoutgoing(repo, heads, common)
    if not outgoing.missing:
        return

    if kwargs.get(r'narrow', False):
        include = sorted(filter(bool, kwargs.get(r'includepats', [])))
        exclude = sorted(filter(bool, kwargs.get(r'excludepats', [])))
        matcher = narrowspec.match(repo.root, include=include, exclude=exclude)
    else:
        matcher = None

    cgstream = changegroup.makestream(
        repo, outgoing, version, source, bundlecaps=bundlecaps, matcher=matcher
    )

    part = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        part.addparam(b'version', version)

    part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False)

    if b'treemanifest' in repo.requirements:
        part.addparam(b'treemanifest', b'1')

    if b'exp-sidedata-flag' in repo.requirements:
        part.addparam(b'exp-sidedata', b'1')

    if (
        kwargs.get(r'narrow', False)
        and kwargs.get(r'narrow_acl', False)
        and (include or exclude)
    ):
        # this is mandatory because otherwise ACL clients won't work
        narrowspecpart = bundler.newpart(b'Narrow:responsespec')
        narrowspecpart.data = b'%s\0%s' % (
            b'\n'.join(include),
            b'\n'.join(exclude),
        )


@getbundle2partsgenerator(b'bookmarks')
def _getbundlebookmarkpart(
    bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
):
    """add a bookmark part to the requested bundle"""
    if not kwargs.get(r'bookmarks', False):
        return
    if b'bookmarks' not in b2caps:
        raise error.Abort(_(b'no common bookmarks exchange method'))
    books = bookmod.listbinbookmarks(repo)
    data = bookmod.binaryencode(books)
    if data:
        bundler.newpart(b'bookmarks', data=data)


@getbundle2partsgenerator(b'listkeys')
def _getbundlelistkeysparts(
    bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs
):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get(r'listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart(b'listkeys')
        part.addparam(b'namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)


@getbundle2partsgenerator(b'obsmarkers')
def _getbundleobsmarkerpart(
    bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get(r'obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set(b'::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)


@getbundle2partsgenerator(b'phases')
def _getbundlephasespart(
    bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs
):
    """add phase heads part to the requested bundle"""
    if kwargs.get(r'phases', False):
        if b'heads' not in b2caps.get(b'phases'):
            raise error.Abort(_(b'no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phases for now
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = b'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = []
        for phase in phases.allphases:
            phasemapping.append(sorted(headsbyphase[phase]))

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart(b'phase-heads', data=phasedata)


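The grouping step at the heart of the phases part generator — bucketing head nodes by their phase with a `defaultdict(set)` — can be shown on its own. A minimal sketch with fabricated node values; the phase numbering follows Mercurial's convention (0 = public, 1 = draft):

```python
# Sketch of grouping heads by phase, as _getbundlephasespart does.
# 'phase_of' is a stand-in for repo._phasecache.phase lookups; the node
# values are fabricated for illustration.
import collections

public, draft = 0, 1
phase_of = {b'h1': public, b'h2': draft, b'h3': draft}

headsbyphase = collections.defaultdict(set)
for h, ph in phase_of.items():
    headsbyphase[ph].add(h)

# Sorting each bucket gives the deterministic per-phase lists that the
# binary encoder expects.
phasemapping = [sorted(headsbyphase[ph]) for ph in (public, draft)]
```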
@getbundle2partsgenerator(b'hgtagsfnodes')
def _getbundletagsfnodes(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    **kwargs
):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True) and b'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)


@getbundle2partsgenerator(b'cache:rev-branch-cache')
def _getbundlerevbranchcache(
    bundler,
    repo,
    source,
    bundlecaps=None,
    b2caps=None,
    heads=None,
    common=None,
    **kwargs
):
    """Transfer the rev-branch-cache mapping

    The payload is a series of data related to each branch

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it,
    # - narrow bundle isn't in play (not currently compatible).
    if (
        not kwargs.get(r'cg', True)
        or b'rev-branch-cache' not in b2caps
        or kwargs.get(r'narrow', False)
        or repo.ui.has_section(_NARROWACL_SECTION)
    ):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addpartrevbranchcache(repo, bundler, outgoing)


def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()
    if not (
        their_heads == [b'force']
        or their_heads == heads
        or their_heads == [b'hashed', heads_hash]
    ):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced(
            b'repository changed while %s - please try again' % context
        )


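The race check in `check_heads` relies on both sides computing the same digest: SHA-1 over the sorted, concatenated head nodes. A standalone sketch with fabricated 20-byte node values, showing the hash matching before a concurrent push and diverging after:

```python
# Minimal sketch of the push-race check: the client sends either the raw
# head list or [b'hashed', sha1-of-sorted-heads]; the server recomputes
# the digest over its current heads and compares. Node values here are
# fabricated placeholders, not real changeset hashes.
import hashlib


def heads_hash(heads):
    # Same digest construction as check_heads: sorted heads, concatenated.
    return hashlib.sha1(b''.join(sorted(heads))).digest()


server_heads = [b'\x01' * 20, b'\x02' * 20]
client_view = [b'hashed', heads_hash(server_heads)]

# Digest matches: no push race, the unbundle may proceed.
assert client_view == [b'hashed', heads_hash(server_heads)]

# Someone else pushes while our data is in flight...
server_heads.append(b'\x03' * 20)

# ...and the recomputed digest no longer matches, which is what triggers
# the PushRaced error in check_heads.
assert client_view != [b'hashed', heads_hash(server_heads)]
```

Sorting before hashing makes the digest independent of head enumeration order, so both peers agree on it without exchanging the full head list.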
2702 def unbundle(repo, cg, heads, source, url):
2707 def unbundle(repo, cg, heads, source, url):
2703 """Apply a bundle to a repo.
2708 """Apply a bundle to a repo.
2704
2709
2705 this function makes sure the repo is locked during the application and have
2710 this function makes sure the repo is locked during the application and have
2706 mechanism to check that no push race occurred between the creation of the
2711 mechanism to check that no push race occurred between the creation of the
2707 bundle and its application.
2712 bundle and its application.
2708
2713
2709 If the push was raced as PushRaced exception is raised."""
2714 If the push was raced as PushRaced exception is raised."""
2710 r = 0
2715 r = 0
2711 # need a transaction when processing a bundle2 stream
2716 # need a transaction when processing a bundle2 stream
2712 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2717 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
2713 lockandtr = [None, None, None]
2718 lockandtr = [None, None, None]
2714 recordout = None
2719 recordout = None
2715 # quick fix for output mismatch with bundle2 in 3.4
2720 # quick fix for output mismatch with bundle2 in 3.4
2716 captureoutput = repo.ui.configbool(
2721 captureoutput = repo.ui.configbool(
2717 b'experimental', b'bundle2-output-capture'
2722 b'experimental', b'bundle2-output-capture'
2718 )
2723 )
2719 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2724 if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'):
2720 captureoutput = True
2725 captureoutput = True
2721 try:
2726 try:
2722 # note: outside bundle1, 'heads' is expected to be empty and this
2727 # note: outside bundle1, 'heads' is expected to be empty and this
2723 # 'check_heads' call wil be a no-op
2728 # 'check_heads' call wil be a no-op
2724 check_heads(repo, heads, b'uploading changes')
2729 check_heads(repo, heads, b'uploading changes')
2725 # push can proceed
2730 # push can proceed
2726 if not isinstance(cg, bundle2.unbundle20):
2731 if not isinstance(cg, bundle2.unbundle20):
2727 # legacy case: bundle1 (changegroup 01)
2732 # legacy case: bundle1 (changegroup 01)
2728 txnname = b"\n".join([source, util.hidepassword(url)])
2733 txnname = b"\n".join([source, util.hidepassword(url)])
2729 with repo.lock(), repo.transaction(txnname) as tr:
2734 with repo.lock(), repo.transaction(txnname) as tr:
2730 op = bundle2.applybundle(repo, cg, tr, source, url)
2735 op = bundle2.applybundle(repo, cg, tr, source, url)
2731 r = bundle2.combinechangegroupresults(op)
2736 r = bundle2.combinechangegroupresults(op)
2732 else:
2737 else:
2733 r = None
2738 r = None
2734 try:
2739 try:
2735
2740
2736 def gettransaction():
2741 def gettransaction():
2737 if not lockandtr[2]:
2742 if not lockandtr[2]:
2738 if not bookmod.bookmarksinstore(repo):
2743 if not bookmod.bookmarksinstore(repo):
2739 lockandtr[0] = repo.wlock()
2744 lockandtr[0] = repo.wlock()
2740 lockandtr[1] = repo.lock()
2745 lockandtr[1] = repo.lock()
2741 lockandtr[2] = repo.transaction(source)
2746 lockandtr[2] = repo.transaction(source)
2742 lockandtr[2].hookargs[b'source'] = source
2747 lockandtr[2].hookargs[b'source'] = source
2743 lockandtr[2].hookargs[b'url'] = url
2748 lockandtr[2].hookargs[b'url'] = url
2744 lockandtr[2].hookargs[b'bundle2'] = b'1'
2749 lockandtr[2].hookargs[b'bundle2'] = b'1'
2745 return lockandtr[2]
2750 return lockandtr[2]
2746
2751
2747 # Do greedy locking by default until we're satisfied with lazy
2752 # Do greedy locking by default until we're satisfied with lazy
2748 # locking.
2753 # locking.
2749 if not repo.ui.configbool(
2754 if not repo.ui.configbool(
2750 b'experimental', b'bundle2lazylocking'
2755 b'experimental', b'bundle2lazylocking'
2751 ):
2756 ):
2752 gettransaction()
2757 gettransaction()
2753
2758
2754 op = bundle2.bundleoperation(
2759 op = bundle2.bundleoperation(
2755 repo,
2760 repo,
2756 gettransaction,
2761 gettransaction,
2757 captureoutput=captureoutput,
2762 captureoutput=captureoutput,
2758 source=b'push',
2763 source=b'push',
2759 )
2764 )
2760 try:
2765 try:
2761 op = bundle2.processbundle(repo, cg, op=op)
2766 op = bundle2.processbundle(repo, cg, op=op)
2762 finally:
2767 finally:
2763 r = op.reply
2768 r = op.reply
2764 if captureoutput and r is not None:
2769 if captureoutput and r is not None:
2765 repo.ui.pushbuffer(error=True, subproc=True)
2770 repo.ui.pushbuffer(error=True, subproc=True)
2766
2771
2767 def recordout(output):
2772 def recordout(output):
2768 r.newpart(b'output', data=output, mandatory=False)
2773 r.newpart(b'output', data=output, mandatory=False)
2769
2774
2770 if lockandtr[2] is not None:
2775 if lockandtr[2] is not None:
2771 lockandtr[2].close()
2776 lockandtr[2].close()
2772 except BaseException as exc:
2777 except BaseException as exc:
2773 exc.duringunbundle2 = True
2778 exc.duringunbundle2 = True
2774 if captureoutput and r is not None:
2779 if captureoutput and r is not None:
2775 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2780 parts = exc._bundle2salvagedoutput = r.salvageoutput()
2776
2781
2777 def recordout(output):
2782 def recordout(output):
2778 part = bundle2.bundlepart(
2783 part = bundle2.bundlepart(
2779 b'output', data=output, mandatory=False
2784 b'output', data=output, mandatory=False
2780 )
2785 )
2781 parts.append(part)
2786 parts.append(part)
2782
2787
2783 raise
2788 raise
2784 finally:
2789 finally:
2785 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2790 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
2786 if recordout is not None:
2791 if recordout is not None:
2787 recordout(repo.ui.popbuffer())
2792 recordout(repo.ui.popbuffer())
2788 return r
2793 return r
2789
2794
2790
2795
2791 def _maybeapplyclonebundle(pullop):
2796 def _maybeapplyclonebundle(pullop):
2792 """Apply a clone bundle from a remote, if possible."""
2797 """Apply a clone bundle from a remote, if possible."""
2793
2798
2794 repo = pullop.repo
2799 repo = pullop.repo
2795 remote = pullop.remote
2800 remote = pullop.remote
2796
2801
2797 if not repo.ui.configbool(b'ui', b'clonebundles'):
2802 if not repo.ui.configbool(b'ui', b'clonebundles'):
2798 return
2803 return
2799
2804
2800 # Only run if local repo is empty.
2805 # Only run if local repo is empty.
2801 if len(repo):
2806 if len(repo):
2802 return
2807 return
2803
2808
2804 if pullop.heads:
2809 if pullop.heads:
2805 return
2810 return
2806
2811
2807 if not remote.capable(b'clonebundles'):
2812 if not remote.capable(b'clonebundles'):
2808 return
2813 return
2809
2814
2810 with remote.commandexecutor() as e:
2815 with remote.commandexecutor() as e:
2811 res = e.callcommand(b'clonebundles', {}).result()
2816 res = e.callcommand(b'clonebundles', {}).result()
2812
2817
2813 # If we call the wire protocol command, that's good enough to record the
2818 # If we call the wire protocol command, that's good enough to record the
2814 # attempt.
2819 # attempt.
2815 pullop.clonebundleattempted = True
2820 pullop.clonebundleattempted = True
2816
2821
2817 entries = parseclonebundlesmanifest(repo, res)
2822 entries = parseclonebundlesmanifest(repo, res)
2818 if not entries:
2823 if not entries:
2819 repo.ui.note(
2824 repo.ui.note(
2820 _(
2825 _(
2821 b'no clone bundles available on remote; '
2826 b'no clone bundles available on remote; '
2822 b'falling back to regular clone\n'
2827 b'falling back to regular clone\n'
2823 )
2828 )
2824 )
2829 )
2825 return
2830 return
2826
2831
2827 entries = filterclonebundleentries(
2832 entries = filterclonebundleentries(
2828 repo, entries, streamclonerequested=pullop.streamclonerequested
2833 repo, entries, streamclonerequested=pullop.streamclonerequested
2829 )
2834 )
2830
2835
2831 if not entries:
2836 if not entries:
2832 # There is a thundering herd concern here. However, if a server
2837 # There is a thundering herd concern here. However, if a server
2833 # operator doesn't advertise bundles appropriate for its clients,
2838 # operator doesn't advertise bundles appropriate for its clients,
2834 # they deserve what's coming. Furthermore, from a client's
2839 # they deserve what's coming. Furthermore, from a client's
2835 # perspective, no automatic fallback would mean not being able to
2840 # perspective, no automatic fallback would mean not being able to
2836 # clone!
2841 # clone!
2837 repo.ui.warn(
2842 repo.ui.warn(
2838 _(
2843 _(
2839 b'no compatible clone bundles available on server; '
2844 b'no compatible clone bundles available on server; '
2840 b'falling back to regular clone\n'
2845 b'falling back to regular clone\n'
2841 )
2846 )
2842 )
2847 )
2843 repo.ui.warn(
2848 repo.ui.warn(
2844 _(b'(you may want to report this to the server operator)\n')
2849 _(b'(you may want to report this to the server operator)\n')
2845 )
2850 )
2846 return
2851 return
2847
2852
2848 entries = sortclonebundleentries(repo.ui, entries)
2853 entries = sortclonebundleentries(repo.ui, entries)
2849
2854
2850 url = entries[0][b'URL']
2855 url = entries[0][b'URL']
2851 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2856 repo.ui.status(_(b'applying clone bundle from %s\n') % url)
2852 if trypullbundlefromurl(repo.ui, repo, url):
2857 if trypullbundlefromurl(repo.ui, repo, url):
2853 repo.ui.status(_(b'finished applying clone bundle\n'))
2858 repo.ui.status(_(b'finished applying clone bundle\n'))
2854 # Bundle failed.
2859 # Bundle failed.
2855 #
2860 #
2856 # We abort by default to avoid the thundering herd of
2861 # We abort by default to avoid the thundering herd of
2857 # clients flooding a server that was expecting expensive
2862 # clients flooding a server that was expecting expensive
2858 # clone load to be offloaded.
2863 # clone load to be offloaded.
2859 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2864 elif repo.ui.configbool(b'ui', b'clonebundlefallback'):
2860 repo.ui.warn(_(b'falling back to normal clone\n'))
2865 repo.ui.warn(_(b'falling back to normal clone\n'))
2861 else:
2866 else:
2862 raise error.Abort(
2867 raise error.Abort(
2863 _(b'error applying bundle'),
2868 _(b'error applying bundle'),
2864 hint=_(
2869 hint=_(
2865 b'if this error persists, consider contacting '
2870 b'if this error persists, consider contacting '
2866 b'the server operator or disable clone '
2871 b'the server operator or disable clone '
2867 b'bundles via '
2872 b'bundles via '
2868 b'"--config ui.clonebundles=false"'
2873 b'"--config ui.clonebundles=false"'
2869 ),
2874 ),
2870 )
2875 )
2871
2876
2872
2877
def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {b'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split(b'=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == b'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value)
                    attrs[b'COMPRESSION'] = bundlespec.compression
                    attrs[b'VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m


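The manifest format parsed above is simple: one entry per line, a URL followed by space-separated, percent-encoded `KEY=VALUE` attributes. A minimal, dependency-free sketch of that wire format (without Mercurial's `BUNDLESPEC` expansion, and with plain `str` rather than the `bytes` keys the real code uses) might look like this:

```python
from urllib.parse import unquote

def parse_manifest(text):
    """Simplified sketch of clone-bundles manifest parsing:
    one URL per line, then percent-encoded KEY=VALUE attributes.
    (No BUNDLESPEC component expansion here.)"""
    entries = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue  # skip blank lines, as the real parser does
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries

# Hypothetical manifest: URLs and attribute names are illustrative only.
manifest = (
    'https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true\n'
    '\n'
    'https://example.com/stream.hg BUNDLESPEC=none-packed1%3Bstream%3Dv2\n'
)
entries = parse_manifest(manifest)
```

Note how percent-encoding lets an attribute value itself contain `;` and `=`, as in the second entry's bundlespec.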
def isstreamclonespec(bundlespec):
    # Stream clone v1
    if bundlespec.wirecompression == b'UN' and bundlespec.wireversion == b's1':
        return True

    # Stream clone v2
    if (
        bundlespec.wirecompression == b'UN'
        and bundlespec.wireversion == b'02'
        and bundlespec.contentopts.get(b'streamv2')
    ):
        return True

    return False


def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get(b'BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and not isstreamclonespec(bundlespec):
                    repo.ui.debug(
                        b'filtering %s because not a stream clone\n'
                        % entry[b'URL']
                    )
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(stringutil.forcebytestr(e) + b'\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug(
                    b'filtering %s because unsupported bundle '
                    b'spec: %s\n' % (entry[b'URL'], stringutil.forcebytestr(e))
                )
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug(
                b'filtering %s because cannot determine if a stream '
                b'clone bundle\n' % entry[b'URL']
            )
            continue

        if b'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug(
                b'filtering %s because SNI not supported\n' % entry[b'URL']
            )
            continue

        newentries.append(entry)

    return newentries


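The filtering policy above can be sketched standalone. This is a simplification under stated assumptions: it skips real bundlespec parsing and instead treats a spec starting with `none-packed1` as a stream clone (the shape of a stream-clone v1 spec), and it uses plain `str` keys; the function name and entries are hypothetical:

```python
def filter_entries(entries, have_sni, stream_requested):
    """Sketch of the clone-bundle filtering policy: drop entries the
    client knows it cannot apply. Assumes 'none-packed1...' marks a
    stream-clone spec (a simplification of the real spec parsing)."""
    kept = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        # Without a spec we cannot tell whether this is a stream clone,
        # so a stream-clone request must skip the entry.
        if stream_requested and spec is None:
            continue
        # A stream-clone request also skips entries whose spec is not a
        # stream clone.
        if stream_requested and not spec.startswith('none-packed1'):
            continue
        # REQUIRESNI entries need SNI-capable TLS on the client.
        if entry.get('REQUIRESNI') == 'true' and not have_sni:
            continue
        kept.append(entry)
    return kept

entries = [
    {'URL': 'a', 'BUNDLESPEC': 'gzip-v2'},
    {'URL': 'b', 'BUNDLESPEC': 'none-packed1;stream=v2'},
    {'URL': 'c', 'REQUIRESNI': 'true', 'BUNDLESPEC': 'gzip-v2'},
]
```

With `stream_requested=True` only entry `b` survives; with `stream_requested=False` and no SNI support, `a` and `b` survive and `c` is dropped.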
class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0


def sortclonebundleentries(ui, entries):
    prefers = ui.configlist(b'ui', b'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split(b'=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]


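The comparison rule implemented by `clonebundleentry._cmp` can be demonstrated standalone. This sketch expresses the same pairwise logic as a plain function bridged through `functools.cmp_to_key` (an illustrative alternative to the rich-comparison class; the names and entries are hypothetical): preferences are checked in order, and an entry exactly matching a preferred value sorts earlier, with ties preserving the server's original order.

```python
import functools

def _cmp(prefers, a, b):
    # Mirror of clonebundleentry._cmp: walk preferences in order; an
    # exact match on the preferred value sorts that entry earlier.
    for prefkey, prefvalue in prefers:
        avalue, bvalue = a.get(prefkey), b.get(prefkey)
        if avalue is not None and bvalue is None and avalue == prefvalue:
            return -1
        if bvalue is not None and avalue is None and bvalue == prefvalue:
            return 1
        if avalue is None or bvalue is None:
            continue  # can't compare unless present on both
        if avalue == bvalue:
            continue  # tie: fall through to the next preference
        if avalue == prefvalue:
            return -1
        if bvalue == prefvalue:
            return 1
    return 0  # stable sort keeps the server's original ordering

def sort_entries(entries, prefers):
    return sorted(
        entries, key=functools.cmp_to_key(lambda a, b: _cmp(prefers, a, b))
    )

candidates = [
    {'URL': 'a', 'COMPRESSION': 'gzip'},
    {'URL': 'b', 'COMPRESSION': 'zstd'},
]
ordered = sort_entries(candidates, [('COMPRESSION', 'zstd')])
```

Here the zstd entry is moved to the front because it exactly matches the first preference, mirroring what `ui.clonebundleprefers=COMPRESSION=zstd` would do.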
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction(b'bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, b'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, b'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(
                _(b'HTTP error fetching bundle: %s\n')
                % stringutil.forcebytestr(e)
            )
        except urlerr.urlerror as e:
            ui.warn(
                _(b'error fetching bundle: %s\n')
                % stringutil.forcebytestr(e.reason)
            )

        return False