bundleoperation: pass the source argument from all the users...
Pulkit Goyal -
r37254:f7d3915d default
@@ -1,2263 +1,2263 b''
1 # bundle2.py - generic container format to transmit arbitrary data.
1 # bundle2.py - generic container format to transmit arbitrary data.
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 """Handling of the new bundle2 format
7 """Handling of the new bundle2 format
8
8
9 The goal of bundle2 is to act as an atomic packet to transmit a set of
9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 payloads in an application-agnostic way. It consists of a sequence of "parts"
10 payloads in an application-agnostic way. It consists of a sequence of "parts"
11 that will be handed to and processed by the application layer.
11 that will be handed to and processed by the application layer.
12
12
13
13
14 General format architecture
14 General format architecture
15 ===========================
15 ===========================
16
16
17 The format is structured as follows
17 The format is structured as follows
18
18
19 - magic string
19 - magic string
20 - stream level parameters
20 - stream level parameters
21 - payload parts (any number)
21 - payload parts (any number)
22 - end of stream marker.
22 - end of stream marker.
23
23
24 The binary format
24 The binary format
25 ============================
25 ============================
26
26
27 All numbers are unsigned and big-endian.
27 All numbers are unsigned and big-endian.
28
28
29 stream level parameters
29 stream level parameters
30 ------------------------
30 ------------------------
31
31
32 The binary format is as follows
32 The binary format is as follows
33
33
34 :params size: int32
34 :params size: int32
35
35
36 The total number of Bytes used by the parameters
36 The total number of Bytes used by the parameters
37
37
38 :params value: arbitrary number of Bytes
38 :params value: arbitrary number of Bytes
39
39
40 A blob of `params size` containing the serialized version of all stream level
40 A blob of `params size` containing the serialized version of all stream level
41 parameters.
41 parameters.
42
42
43 The blob contains a space separated list of parameters. Parameters with value
43 The blob contains a space separated list of parameters. Parameters with value
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45
45
46 Empty names are obviously forbidden.
46 Empty names are obviously forbidden.
47
47
48 Names MUST start with a letter. If this first letter is lower case, the
48 Names MUST start with a letter. If this first letter is lower case, the
49 parameter is advisory and can be safely ignored. However, when the first
49 parameter is advisory and can be safely ignored. However, when the first
50 letter is capital, the parameter is mandatory and the bundling process MUST
50 letter is capital, the parameter is mandatory and the bundling process MUST
51 stop if it is not able to process it.
51 stop if it is not able to process it.
52
52
53 Stream parameters use a simple textual format for two main reasons:
53 Stream parameters use a simple textual format for two main reasons:
54
54
55 - Stream level parameters should remain simple and we want to discourage any
55 - Stream level parameters should remain simple and we want to discourage any
56 crazy usage.
56 crazy usage.
57 - Textual data allows easy human inspection of a bundle2 header in case of
57 - Textual data allows easy human inspection of a bundle2 header in case of
58 trouble.
58 trouble.
59
59
60 Any application-level options MUST go into a bundle2 part instead.
60 Any application-level options MUST go into a bundle2 part instead.
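
For illustration, a minimal decoding sketch for the parameter blob (it mirrors
what ``_processallparams`` below does; ``urlreq.unquote`` is the url-unquoting
helper already used in this module)::

    params = {}
    for item in blob.split(' '):
        if not item:
            continue
        if '=' in item:
            name, value = item.split('=', 1)
        else:
            name, value = item, None
        name = urlreq.unquote(name)
        if value is not None:
            value = urlreq.unquote(value)
        params[name] = value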
61
61
62 Payload part
62 Payload part
63 ------------------------
63 ------------------------
64
64
65 The binary format is as follows
65 The binary format is as follows
66
66
67 :header size: int32
67 :header size: int32
68
68
69 The total number of Bytes used by the part header. When the header is empty
69 The total number of Bytes used by the part header. When the header is empty
70 (size = 0) this is interpreted as the end of stream marker.
70 (size = 0) this is interpreted as the end of stream marker.
71
71
72 :header:
72 :header:
73
73
74 The header defines how to interpret the part. It contains two pieces of
74 The header defines how to interpret the part. It contains two pieces of
75 data: the part type, and the part parameters.
75 data: the part type, and the part parameters.
76
76
77 The part type is used to route to an application level handler that can
77 The part type is used to route to an application level handler that can
78 interpret the payload.
78 interpret the payload.
79
79
80 Part parameters are passed to the application level handler. They are
80 Part parameters are passed to the application level handler. They are
81 meant to convey information that will help the application level handler
81 meant to convey information that will help the application level handler
82 interpret the part payload.
82 interpret the part payload.
83
83
84 The binary format of the header is as follows
84 The binary format of the header is as follows
85
85
86 :typesize: (one byte)
86 :typesize: (one byte)
87
87
88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89
89
90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 to this part.
91 to this part.
92
92
93 :parameters:
93 :parameters:
94
94
95 Part parameters may have arbitrary content; the binary structure is::
95 Part parameters may have arbitrary content; the binary structure is::
96
96
97 <mandatory-count><advisory-count><param-sizes><param-data>
97 <mandatory-count><advisory-count><param-sizes><param-data>
98
98
99 :mandatory-count: 1 byte, number of mandatory parameters
99 :mandatory-count: 1 byte, number of mandatory parameters
100
100
101 :advisory-count: 1 byte, number of advisory parameters
101 :advisory-count: 1 byte, number of advisory parameters
102
102
103 :param-sizes:
103 :param-sizes:
104
104
105 N pairs of bytes, where N is the total number of parameters. Each
105 N pairs of bytes, where N is the total number of parameters. Each
106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107
107
108 :param-data:
108 :param-data:
109
109
110 A blob of bytes from which each parameter key and value can be
110 A blob of bytes from which each parameter key and value can be
111 retrieved using the list of size couples stored in the previous
111 retrieved using the list of size couples stored in the previous
112 field.
112 field.
113
113
114 Mandatory parameters come first, then the advisory ones.
114 Mandatory parameters come first, then the advisory ones.
115
115
116 Each parameter's key MUST be unique within the part.
116 Each parameter's key MUST be unique within the part.
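
For illustration, a decoding sketch for this parameter block (``data`` is
assumed to be the remainder of the part header after the part id; the struct
formats match the ``_fpartparamcount`` and ``_makefpartparamsizes`` definitions
below)::

    mancount, advcount = struct.unpack('>BB', data[:2])
    total = mancount + advcount
    sizes = struct.unpack('>' + 'BB' * total, data[2:2 + 2 * total])
    offset = 2 + 2 * total
    params = []
    for i in range(total):
        keysize, valsize = sizes[2 * i], sizes[2 * i + 1]
        key = data[offset:offset + keysize]
        offset += keysize
        value = data[offset:offset + valsize]
        offset += valsize
        params.append((key, value))
    # the first `mancount` entries are the mandatory parameters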
117
117
118 :payload:
118 :payload:
119
119
120 payload is a series of `<chunksize><chunkdata>`.
120 payload is a series of `<chunksize><chunkdata>`.
121
121
122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 `chunksize` says). The payload part is concluded by a zero size chunk.
123 `chunksize` says). The payload part is concluded by a zero size chunk.
124
124
125 The current implementation always produces either zero or one chunk.
125 The current implementation always produces either zero or one chunk.
126 This is an implementation limitation that will ultimately be lifted.
126 This is an implementation limitation that will ultimately be lifted.
127
127
128 `chunksize` can be negative to trigger special case processing. No such
128 `chunksize` can be negative to trigger special case processing. No such
129 processing is in place yet.
129 processing is in place yet.
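
For illustration, a reading loop for the payload (``read(n)`` is assumed to
return exactly ``n`` bytes from the stream; negative sizes are not handled in
this sketch)::

    chunks = []
    while True:
        chunksize = struct.unpack('>i', read(4))[0]
        if chunksize == 0:
            break  # zero size chunk: end of payload
        if chunksize < 0:
            raise ValueError('special chunk sizes are not handled here')
        chunks.append(read(chunksize))
    payload = ''.join(chunks)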
130
130
131 Bundle processing
131 Bundle processing
132 ============================
132 ============================
133
133
134 Each part is processed in order using a "part handler". Handlers are registered
134 Each part is processed in order using a "part handler". Handlers are registered
135 for a certain part type.
135 for a certain part type.
136
136
137 The matching of a part to its handler is case insensitive. The case of the
137 The matching of a part to its handler is case insensitive. The case of the
138 part type is used to know if a part is mandatory or advisory. If the part type
138 part type is used to know if a part is mandatory or advisory. If the part type
139 contains any uppercase character it is considered mandatory. When no handler is
139 contains any uppercase character it is considered mandatory. When no handler is
140 known for a mandatory part, the process is aborted and an exception is raised.
140 known for a mandatory part, the process is aborted and an exception is raised.
141 If the part is advisory and no handler is known, the part is ignored. When the
141 If the part is advisory and no handler is known, the part is ignored. When the
142 process is aborted, the full bundle is still read from the stream to keep the
142 process is aborted, the full bundle is still read from the stream to keep the
143 channel usable. But none of the parts read after an abort are processed. In the
143 channel usable. But none of the parts read after an abort are processed. In the
144 future, dropping the stream may become an option for channels we do not care to
144 future, dropping the stream may become an option for channels we do not care to
145 preserve.
145 preserve.
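
A dispatch sketch following these rules (``parthandlermapping`` is the handler
registry defined later in this module; error handling is simplified)::

    handler = parthandlermapping.get(parttype.lower())
    if handler is None:
        if parttype.lower() != parttype:
            # any uppercase character makes the part mandatory: abort
            raise error.BundleUnknownFeatureError(parttype=parttype)
        # unknown advisory part: skip it
    else:
        handler(op, part)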
146 """
146 """
147
147
148 from __future__ import absolute_import, division
148 from __future__ import absolute_import, division
149
149
150 import collections
150 import collections
151 import errno
151 import errno
152 import os
152 import os
153 import re
153 import re
154 import string
154 import string
155 import struct
155 import struct
156 import sys
156 import sys
157
157
158 from .i18n import _
158 from .i18n import _
159 from . import (
159 from . import (
160 bookmarks,
160 bookmarks,
161 changegroup,
161 changegroup,
162 encoding,
162 encoding,
163 error,
163 error,
164 node as nodemod,
164 node as nodemod,
165 obsolete,
165 obsolete,
166 phases,
166 phases,
167 pushkey,
167 pushkey,
168 pycompat,
168 pycompat,
169 streamclone,
169 streamclone,
170 tags,
170 tags,
171 url,
171 url,
172 util,
172 util,
173 )
173 )
174 from .utils import (
174 from .utils import (
175 stringutil,
175 stringutil,
176 )
176 )
177
177
178 urlerr = util.urlerr
178 urlerr = util.urlerr
179 urlreq = util.urlreq
179 urlreq = util.urlreq
180
180
181 _pack = struct.pack
181 _pack = struct.pack
182 _unpack = struct.unpack
182 _unpack = struct.unpack
183
183
184 _fstreamparamsize = '>i'
184 _fstreamparamsize = '>i'
185 _fpartheadersize = '>i'
185 _fpartheadersize = '>i'
186 _fparttypesize = '>B'
186 _fparttypesize = '>B'
187 _fpartid = '>I'
187 _fpartid = '>I'
188 _fpayloadsize = '>i'
188 _fpayloadsize = '>i'
189 _fpartparamcount = '>BB'
189 _fpartparamcount = '>BB'
190
190
191 preferedchunksize = 32768
191 preferedchunksize = 32768
192
192
193 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
193 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
194
194
195 def outdebug(ui, message):
195 def outdebug(ui, message):
196 """debug regarding output stream (bundling)"""
196 """debug regarding output stream (bundling)"""
197 if ui.configbool('devel', 'bundle2.debug'):
197 if ui.configbool('devel', 'bundle2.debug'):
198 ui.debug('bundle2-output: %s\n' % message)
198 ui.debug('bundle2-output: %s\n' % message)
199
199
200 def indebug(ui, message):
200 def indebug(ui, message):
201 """debug on input stream (unbundling)"""
201 """debug on input stream (unbundling)"""
202 if ui.configbool('devel', 'bundle2.debug'):
202 if ui.configbool('devel', 'bundle2.debug'):
203 ui.debug('bundle2-input: %s\n' % message)
203 ui.debug('bundle2-input: %s\n' % message)
204
204
205 def validateparttype(parttype):
205 def validateparttype(parttype):
206 """raise ValueError if a parttype contains invalid character"""
206 """raise ValueError if a parttype contains invalid character"""
207 if _parttypeforbidden.search(parttype):
207 if _parttypeforbidden.search(parttype):
208 raise ValueError(parttype)
208 raise ValueError(parttype)
209
209
210 def _makefpartparamsizes(nbparams):
210 def _makefpartparamsizes(nbparams):
211 """return a struct format to read part parameter sizes
211 """return a struct format to read part parameter sizes
212
212
213 The number of parameters is variable so we need to build that format
213 The number of parameters is variable so we need to build that format
214 dynamically.
214 dynamically.
215 """
215 """
216 return '>'+('BB'*nbparams)
216 return '>'+('BB'*nbparams)
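# For example, _makefpartparamsizes(2) returns '>BBBB': one
# (key size, value size) byte pair per parameter.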
217
217
218 parthandlermapping = {}
218 parthandlermapping = {}
219
219
220 def parthandler(parttype, params=()):
220 def parthandler(parttype, params=()):
221 """decorator that register a function as a bundle2 part handler
221 """decorator that register a function as a bundle2 part handler
222
222
223 eg::
223 eg::
224
224
225 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
225 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
226 def myparttypehandler(...):
226 def myparttypehandler(...):
227 '''process a part of type "my part".'''
227 '''process a part of type "my part".'''
228 ...
228 ...
229 """
229 """
230 validateparttype(parttype)
230 validateparttype(parttype)
231 def _decorator(func):
231 def _decorator(func):
232 lparttype = parttype.lower() # enforce lower case matching.
232 lparttype = parttype.lower() # enforce lower case matching.
233 assert lparttype not in parthandlermapping
233 assert lparttype not in parthandlermapping
234 parthandlermapping[lparttype] = func
234 parthandlermapping[lparttype] = func
235 func.params = frozenset(params)
235 func.params = frozenset(params)
236 return func
236 return func
237 return _decorator
237 return _decorator
238
238
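# Illustrative sketch (not part of the upstream module): registering a handler
# for a hypothetical advisory part type. The lowercase name means receivers
# that do not know this handler will simply skip such parts.
#
#     @parthandler('x-example', ('key',))
#     def handlexample(op, part):
#         op.ui.note('x-example part: key=%s\n' % part.params['key'])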
239 class unbundlerecords(object):
239 class unbundlerecords(object):
240 """keep record of what happens during and unbundle
240 """keep record of what happens during and unbundle
241
241
242 New records are added using `records.add('cat', obj)`. Where 'cat' is a
242 New records are added using `records.add('cat', obj)`. Where 'cat' is a
243 category of record and obj is an arbitrary object.
243 category of record and obj is an arbitrary object.
244
244
245 `records['cat']` will return all entries of this category 'cat'.
245 `records['cat']` will return all entries of this category 'cat'.
246
246
247 Iterating on the object itself will yield `('category', obj)` tuples
247 Iterating on the object itself will yield `('category', obj)` tuples
248 for all entries.
248 for all entries.
249
249
250 All iterations happen in chronological order.
250 All iterations happen in chronological order.
251 """
251 """
252
252
253 def __init__(self):
253 def __init__(self):
254 self._categories = {}
254 self._categories = {}
255 self._sequences = []
255 self._sequences = []
256 self._replies = {}
256 self._replies = {}
257
257
258 def add(self, category, entry, inreplyto=None):
258 def add(self, category, entry, inreplyto=None):
259 """add a new record of a given category.
259 """add a new record of a given category.
260
260
261 The entry can then be retrieved in the list returned by
261 The entry can then be retrieved in the list returned by
262 self['category']."""
262 self['category']."""
263 self._categories.setdefault(category, []).append(entry)
263 self._categories.setdefault(category, []).append(entry)
264 self._sequences.append((category, entry))
264 self._sequences.append((category, entry))
265 if inreplyto is not None:
265 if inreplyto is not None:
266 self.getreplies(inreplyto).add(category, entry)
266 self.getreplies(inreplyto).add(category, entry)
267
267
268 def getreplies(self, partid):
268 def getreplies(self, partid):
269 """get the records that are replies to a specific part"""
269 """get the records that are replies to a specific part"""
270 return self._replies.setdefault(partid, unbundlerecords())
270 return self._replies.setdefault(partid, unbundlerecords())
271
271
272 def __getitem__(self, cat):
272 def __getitem__(self, cat):
273 return tuple(self._categories.get(cat, ()))
273 return tuple(self._categories.get(cat, ()))
274
274
275 def __iter__(self):
275 def __iter__(self):
276 return iter(self._sequences)
276 return iter(self._sequences)
277
277
278 def __len__(self):
278 def __len__(self):
279 return len(self._sequences)
279 return len(self._sequences)
280
280
281 def __nonzero__(self):
281 def __nonzero__(self):
282 return bool(self._sequences)
282 return bool(self._sequences)
283
283
284 __bool__ = __nonzero__
284 __bool__ = __nonzero__
285
285
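# Illustrative usage sketch for unbundlerecords (not part of the upstream
# module); the category names and payloads are arbitrary examples.
#
#     records = unbundlerecords()
#     records.add('changegroup', {'return': 1})
#     records.add('output', 'remote: processed\n')
#     records['changegroup']   # -> ({'return': 1},)
#     list(records)            # -> [('changegroup', {'return': 1}),
#                              #     ('output', 'remote: processed\n')]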
286 class bundleoperation(object):
286 class bundleoperation(object):
287 """an object that represents a single bundling process
287 """an object that represents a single bundling process
288
288
289 Its purpose is to carry unbundle-related objects and states.
289 Its purpose is to carry unbundle-related objects and states.
290
290
291 A new object should be created at the beginning of each bundle processing.
291 A new object should be created at the beginning of each bundle processing.
292 The object is to be returned by the processing function.
292 The object is to be returned by the processing function.
293
293
294 The object has very little content now; it will ultimately contain:
294 The object has very little content now; it will ultimately contain:
295 * access to the repo the bundle is applied to,
295 * access to the repo the bundle is applied to,
296 * a ui object,
296 * a ui object,
297 * a way to retrieve a transaction to add changes to the repo,
297 * a way to retrieve a transaction to add changes to the repo,
298 * a way to record the result of processing each part,
298 * a way to record the result of processing each part,
299 * a way to construct a bundle response when applicable.
299 * a way to construct a bundle response when applicable.
300 """
300 """
301
301
302 def __init__(self, repo, transactiongetter, captureoutput=True, source=''):
302 def __init__(self, repo, transactiongetter, captureoutput=True, source=''):
303 self.repo = repo
303 self.repo = repo
304 self.ui = repo.ui
304 self.ui = repo.ui
305 self.records = unbundlerecords()
305 self.records = unbundlerecords()
306 self.reply = None
306 self.reply = None
307 self.captureoutput = captureoutput
307 self.captureoutput = captureoutput
308 self.hookargs = {}
308 self.hookargs = {}
309 self._gettransaction = transactiongetter
309 self._gettransaction = transactiongetter
310 # carries value that can modify part behavior
310 # carries value that can modify part behavior
311 self.modes = {}
311 self.modes = {}
312 self.source = source
312 self.source = source
313
313
314 def gettransaction(self):
314 def gettransaction(self):
315 transaction = self._gettransaction()
315 transaction = self._gettransaction()
316
316
317 if self.hookargs:
317 if self.hookargs:
318 # the ones added to the transaction supersede those added
318 # the ones added to the transaction supersede those added
319 # to the operation.
319 # to the operation.
320 self.hookargs.update(transaction.hookargs)
320 self.hookargs.update(transaction.hookargs)
321 transaction.hookargs = self.hookargs
321 transaction.hookargs = self.hookargs
322
322
323 # mark the hookargs as flushed. further attempts to add to
323 # mark the hookargs as flushed. further attempts to add to
324 # hookargs will result in an abort.
324 # hookargs will result in an abort.
325 self.hookargs = None
325 self.hookargs = None
326
326
327 return transaction
327 return transaction
328
328
329 def addhookargs(self, hookargs):
329 def addhookargs(self, hookargs):
330 if self.hookargs is None:
330 if self.hookargs is None:
331 raise error.ProgrammingError('attempted to add hookargs to '
331 raise error.ProgrammingError('attempted to add hookargs to '
332 'operation after transaction started')
332 'operation after transaction started')
333 self.hookargs.update(hookargs)
333 self.hookargs.update(hookargs)
334
334
335 class TransactionUnavailable(RuntimeError):
335 class TransactionUnavailable(RuntimeError):
336 pass
336 pass
337
337
338 def _notransaction():
338 def _notransaction():
339 """default method to get a transaction while processing a bundle
339 """default method to get a transaction while processing a bundle
340
340
341 Raise an exception to highlight the fact that no transaction was expected
341 Raise an exception to highlight the fact that no transaction was expected
342 to be created"""
342 to be created"""
343 raise TransactionUnavailable()
343 raise TransactionUnavailable()
344
344
345 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
345 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
346 # transform me into unbundler.apply() as soon as the freeze is lifted
346 # transform me into unbundler.apply() as soon as the freeze is lifted
347 if isinstance(unbundler, unbundle20):
347 if isinstance(unbundler, unbundle20):
348 tr.hookargs['bundle2'] = '1'
348 tr.hookargs['bundle2'] = '1'
349 if source is not None and 'source' not in tr.hookargs:
349 if source is not None and 'source' not in tr.hookargs:
350 tr.hookargs['source'] = source
350 tr.hookargs['source'] = source
351 if url is not None and 'url' not in tr.hookargs:
351 if url is not None and 'url' not in tr.hookargs:
352 tr.hookargs['url'] = url
352 tr.hookargs['url'] = url
353 return processbundle(repo, unbundler, lambda: tr, source=source)
353 return processbundle(repo, unbundler, lambda: tr, source=source)
354 else:
354 else:
355 # the transactiongetter won't be used, but we might as well set it
355 # the transactiongetter won't be used, but we might as well set it
356 op = bundleoperation(repo, lambda: tr)
356 op = bundleoperation(repo, lambda: tr, source=source)
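        # forward the bundle source so hooks and part handlers can tell
        # where the incoming data came from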
357 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
357 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
358 return op
358 return op
359
359
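# Illustrative sketch (not part of the upstream module): one way a caller
# could apply an incoming bundle while labelling its origin. ``fp`` is assumed
# to be a file object positioned at the start of a bundle2 stream, and 'push'
# is only an example source value.
def _exampleapplybundle(repo, fp):
    unbundler = getunbundler(repo.ui, fp)
    with repo.transaction('unbundle') as tr:
        return applybundle(repo, unbundler, tr, source='push',
                           url='bundle:example')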
360 class partiterator(object):
360 class partiterator(object):
361 def __init__(self, repo, op, unbundler):
361 def __init__(self, repo, op, unbundler):
362 self.repo = repo
362 self.repo = repo
363 self.op = op
363 self.op = op
364 self.unbundler = unbundler
364 self.unbundler = unbundler
365 self.iterator = None
365 self.iterator = None
366 self.count = 0
366 self.count = 0
367 self.current = None
367 self.current = None
368
368
369 def __enter__(self):
369 def __enter__(self):
370 def func():
370 def func():
371 itr = enumerate(self.unbundler.iterparts())
371 itr = enumerate(self.unbundler.iterparts())
372 for count, p in itr:
372 for count, p in itr:
373 self.count = count
373 self.count = count
374 self.current = p
374 self.current = p
375 yield p
375 yield p
376 p.consume()
376 p.consume()
377 self.current = None
377 self.current = None
378 self.iterator = func()
378 self.iterator = func()
379 return self.iterator
379 return self.iterator
380
380
381 def __exit__(self, type, exc, tb):
381 def __exit__(self, type, exc, tb):
382 if not self.iterator:
382 if not self.iterator:
383 return
383 return
384
384
385 # Only gracefully abort in a normal exception situation. User aborts
385 # Only gracefully abort in a normal exception situation. User aborts
386 # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
386 # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
387 # and should not gracefully cleanup.
387 # and should not gracefully cleanup.
388 if isinstance(exc, Exception):
388 if isinstance(exc, Exception):
389 # Any exceptions seeking to the end of the bundle at this point are
389 # Any exceptions seeking to the end of the bundle at this point are
390 # almost certainly related to the underlying stream being bad.
390 # almost certainly related to the underlying stream being bad.
391 # And, chances are that the exception we're handling is related to
391 # And, chances are that the exception we're handling is related to
392 # getting in that bad state. So, we swallow the seeking error and
392 # getting in that bad state. So, we swallow the seeking error and
393 # re-raise the original error.
393 # re-raise the original error.
394 seekerror = False
394 seekerror = False
395 try:
395 try:
396 if self.current:
396 if self.current:
397 # consume the part content to not corrupt the stream.
397 # consume the part content to not corrupt the stream.
398 self.current.consume()
398 self.current.consume()
399
399
400 for part in self.iterator:
400 for part in self.iterator:
401 # consume the bundle content
401 # consume the bundle content
402 part.consume()
402 part.consume()
403 except Exception:
403 except Exception:
404 seekerror = True
404 seekerror = True
405
405
406 # Small hack to let caller code distinguish exceptions from bundle2
406 # Small hack to let caller code distinguish exceptions from bundle2
407 # processing from processing the old format. This is mostly needed
407 # processing from processing the old format. This is mostly needed
408 # to handle different return codes to unbundle according to the type
408 # to handle different return codes to unbundle according to the type
409 # of bundle. We should probably clean up or drop this return code
409 # of bundle. We should probably clean up or drop this return code
410 # craziness in a future version.
410 # craziness in a future version.
411 exc.duringunbundle2 = True
411 exc.duringunbundle2 = True
412 salvaged = []
412 salvaged = []
413 replycaps = None
413 replycaps = None
414 if self.op.reply is not None:
414 if self.op.reply is not None:
415 salvaged = self.op.reply.salvageoutput()
415 salvaged = self.op.reply.salvageoutput()
416 replycaps = self.op.reply.capabilities
416 replycaps = self.op.reply.capabilities
417 exc._replycaps = replycaps
417 exc._replycaps = replycaps
418 exc._bundle2salvagedoutput = salvaged
418 exc._bundle2salvagedoutput = salvaged
419
419
420 # Re-raising from a variable loses the original stack. So only use
420 # Re-raising from a variable loses the original stack. So only use
421 # that form if we need to.
421 # that form if we need to.
422 if seekerror:
422 if seekerror:
423 raise exc
423 raise exc
424
424
425 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
425 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
426 self.count)
426 self.count)
427
427
428 def processbundle(repo, unbundler, transactiongetter=None, op=None, source=''):
428 def processbundle(repo, unbundler, transactiongetter=None, op=None, source=''):
429 """This function process a bundle, apply effect to/from a repo
429 """This function process a bundle, apply effect to/from a repo
430
430
431 It iterates over each part then searches for and uses the proper handling
431 It iterates over each part then searches for and uses the proper handling
432 code to process the part. Parts are processed in order.
432 code to process the part. Parts are processed in order.
433
433
434 An unknown mandatory part will abort the process.
434 An unknown mandatory part will abort the process.
435
435
436 It is temporarily possible to provide a prebuilt bundleoperation to the
436 It is temporarily possible to provide a prebuilt bundleoperation to the
437 function. This is used to ensure output is properly propagated in case of
437 function. This is used to ensure output is properly propagated in case of
438 an error during the unbundling. This output capturing part will likely be
438 an error during the unbundling. This output capturing part will likely be
439 reworked and this ability will probably go away in the process.
439 reworked and this ability will probably go away in the process.
440 """
440 """
441 if op is None:
441 if op is None:
442 if transactiongetter is None:
442 if transactiongetter is None:
443 transactiongetter = _notransaction
443 transactiongetter = _notransaction
444 op = bundleoperation(repo, transactiongetter)
444 op = bundleoperation(repo, transactiongetter, source=source)
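        # record the bundle source on the operation so part handlers and
        # hooks can identify where the data came from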
445 # todo:
445 # todo:
446 # - replace this with an init function soon.
446 # - replace this with an init function soon.
447 # - exception catching
447 # - exception catching
448 unbundler.params
448 unbundler.params
449 if repo.ui.debugflag:
449 if repo.ui.debugflag:
450 msg = ['bundle2-input-bundle:']
450 msg = ['bundle2-input-bundle:']
451 if unbundler.params:
451 if unbundler.params:
452 msg.append(' %i params' % len(unbundler.params))
452 msg.append(' %i params' % len(unbundler.params))
453 if op._gettransaction is None or op._gettransaction is _notransaction:
453 if op._gettransaction is None or op._gettransaction is _notransaction:
454 msg.append(' no-transaction')
454 msg.append(' no-transaction')
455 else:
455 else:
456 msg.append(' with-transaction')
456 msg.append(' with-transaction')
457 msg.append('\n')
457 msg.append('\n')
458 repo.ui.debug(''.join(msg))
458 repo.ui.debug(''.join(msg))
459
459
460 processparts(repo, op, unbundler)
460 processparts(repo, op, unbundler)
461
461
462 return op
462 return op
463
463
464 def processparts(repo, op, unbundler):
464 def processparts(repo, op, unbundler):
465 with partiterator(repo, op, unbundler) as parts:
465 with partiterator(repo, op, unbundler) as parts:
466 for part in parts:
466 for part in parts:
467 _processpart(op, part)
467 _processpart(op, part)
468
468
469 def _processchangegroup(op, cg, tr, source, url, **kwargs):
469 def _processchangegroup(op, cg, tr, source, url, **kwargs):
470 ret = cg.apply(op.repo, tr, source, url, **kwargs)
470 ret = cg.apply(op.repo, tr, source, url, **kwargs)
471 op.records.add('changegroup', {
471 op.records.add('changegroup', {
472 'return': ret,
472 'return': ret,
473 })
473 })
474 return ret
474 return ret
475
475
476 def _gethandler(op, part):
476 def _gethandler(op, part):
477 status = 'unknown' # used by debug output
477 status = 'unknown' # used by debug output
478 try:
478 try:
479 handler = parthandlermapping.get(part.type)
479 handler = parthandlermapping.get(part.type)
480 if handler is None:
480 if handler is None:
481 status = 'unsupported-type'
481 status = 'unsupported-type'
482 raise error.BundleUnknownFeatureError(parttype=part.type)
482 raise error.BundleUnknownFeatureError(parttype=part.type)
483 indebug(op.ui, 'found a handler for part %s' % part.type)
483 indebug(op.ui, 'found a handler for part %s' % part.type)
484 unknownparams = part.mandatorykeys - handler.params
484 unknownparams = part.mandatorykeys - handler.params
485 if unknownparams:
485 if unknownparams:
486 unknownparams = list(unknownparams)
486 unknownparams = list(unknownparams)
487 unknownparams.sort()
487 unknownparams.sort()
488 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
488 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
489 raise error.BundleUnknownFeatureError(parttype=part.type,
489 raise error.BundleUnknownFeatureError(parttype=part.type,
490 params=unknownparams)
490 params=unknownparams)
491 status = 'supported'
491 status = 'supported'
492 except error.BundleUnknownFeatureError as exc:
492 except error.BundleUnknownFeatureError as exc:
493 if part.mandatory: # mandatory parts
493 if part.mandatory: # mandatory parts
494 raise
494 raise
495 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
495 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
496 return # skip to part processing
496 return # skip to part processing
497 finally:
497 finally:
498 if op.ui.debugflag:
498 if op.ui.debugflag:
499 msg = ['bundle2-input-part: "%s"' % part.type]
499 msg = ['bundle2-input-part: "%s"' % part.type]
500 if not part.mandatory:
500 if not part.mandatory:
501 msg.append(' (advisory)')
501 msg.append(' (advisory)')
502 nbmp = len(part.mandatorykeys)
502 nbmp = len(part.mandatorykeys)
503 nbap = len(part.params) - nbmp
503 nbap = len(part.params) - nbmp
504 if nbmp or nbap:
504 if nbmp or nbap:
505 msg.append(' (params:')
505 msg.append(' (params:')
506 if nbmp:
506 if nbmp:
507 msg.append(' %i mandatory' % nbmp)
507 msg.append(' %i mandatory' % nbmp)
508 if nbap:
508 if nbap:
509 msg.append(' %i advisory' % nbap)
509 msg.append(' %i advisory' % nbap)
510 msg.append(')')
510 msg.append(')')
511 msg.append(' %s\n' % status)
511 msg.append(' %s\n' % status)
512 op.ui.debug(''.join(msg))
512 op.ui.debug(''.join(msg))
513
513
514 return handler
514 return handler
515
515
516 def _processpart(op, part):
516 def _processpart(op, part):
517 """process a single part from a bundle
517 """process a single part from a bundle
518
518
519 The part is guaranteed to have been fully consumed when the function exits
519 The part is guaranteed to have been fully consumed when the function exits
520 (even if an exception is raised)."""
520 (even if an exception is raised)."""
521 handler = _gethandler(op, part)
521 handler = _gethandler(op, part)
522 if handler is None:
522 if handler is None:
523 return
523 return
524
524
525 # handler is called outside the above try block so that we don't
525 # handler is called outside the above try block so that we don't
526 # risk catching KeyErrors from anything other than the
526 # risk catching KeyErrors from anything other than the
527 # parthandlermapping lookup (any KeyError raised by handler()
527 # parthandlermapping lookup (any KeyError raised by handler()
528 # itself represents a defect of a different variety).
528 # itself represents a defect of a different variety).
529 output = None
529 output = None
530 if op.captureoutput and op.reply is not None:
530 if op.captureoutput and op.reply is not None:
531 op.ui.pushbuffer(error=True, subproc=True)
531 op.ui.pushbuffer(error=True, subproc=True)
532 output = ''
532 output = ''
533 try:
533 try:
534 handler(op, part)
534 handler(op, part)
535 finally:
535 finally:
536 if output is not None:
536 if output is not None:
537 output = op.ui.popbuffer()
537 output = op.ui.popbuffer()
538 if output:
538 if output:
539 outpart = op.reply.newpart('output', data=output,
539 outpart = op.reply.newpart('output', data=output,
540 mandatory=False)
540 mandatory=False)
541 outpart.addparam(
541 outpart.addparam(
542 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
542 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
543
543
544 def decodecaps(blob):
544 def decodecaps(blob):
545 """decode a bundle2 caps bytes blob into a dictionary
545 """decode a bundle2 caps bytes blob into a dictionary
546
546
547 The blob is a list of capabilities (one per line)
547 The blob is a list of capabilities (one per line)
548 Capabilities may have values using a line of the form::
548 Capabilities may have values using a line of the form::
549
549
550 capability=value1,value2,value3
550 capability=value1,value2,value3
551
551
552 The values are always a list."""
552 The values are always a list."""
553 caps = {}
553 caps = {}
554 for line in blob.splitlines():
554 for line in blob.splitlines():
555 if not line:
555 if not line:
556 continue
556 continue
557 if '=' not in line:
557 if '=' not in line:
558 key, vals = line, ()
558 key, vals = line, ()
559 else:
559 else:
560 key, vals = line.split('=', 1)
560 key, vals = line.split('=', 1)
561 vals = vals.split(',')
561 vals = vals.split(',')
562 key = urlreq.unquote(key)
562 key = urlreq.unquote(key)
563 vals = [urlreq.unquote(v) for v in vals]
563 vals = [urlreq.unquote(v) for v in vals]
564 caps[key] = vals
564 caps[key] = vals
565 return caps
565 return caps
566
566
567 def encodecaps(caps):
567 def encodecaps(caps):
568 """encode a bundle2 caps dictionary into a bytes blob"""
568 """encode a bundle2 caps dictionary into a bytes blob"""
569 chunks = []
569 chunks = []
570 for ca in sorted(caps):
570 for ca in sorted(caps):
571 vals = caps[ca]
571 vals = caps[ca]
572 ca = urlreq.quote(ca)
572 ca = urlreq.quote(ca)
573 vals = [urlreq.quote(v) for v in vals]
573 vals = [urlreq.quote(v) for v in vals]
574 if vals:
574 if vals:
575 ca = "%s=%s" % (ca, ','.join(vals))
575 ca = "%s=%s" % (ca, ','.join(vals))
576 chunks.append(ca)
576 chunks.append(ca)
577 return '\n'.join(chunks)
577 return '\n'.join(chunks)
578
578
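# Illustrative round-trip sketch for the two helpers above (not part of the
# upstream module); capability names and values are arbitrary examples.
#
#     caps = {'HG20': (), 'changegroup': ('01', '02')}
#     blob = encodecaps(caps)   # -> 'HG20\nchangegroup=01,02'
#     decodecaps(blob)          # -> {'HG20': [], 'changegroup': ['01', '02']}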
579 bundletypes = {
579 bundletypes = {
580 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
580 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
581 # since the unification ssh accepts a header but there
581 # since the unification ssh accepts a header but there
582 # is no capability signaling it.
582 # is no capability signaling it.
583 "HG20": (), # special-cased below
583 "HG20": (), # special-cased below
584 "HG10UN": ("HG10UN", 'UN'),
584 "HG10UN": ("HG10UN", 'UN'),
585 "HG10BZ": ("HG10", 'BZ'),
585 "HG10BZ": ("HG10", 'BZ'),
586 "HG10GZ": ("HG10GZ", 'GZ'),
586 "HG10GZ": ("HG10GZ", 'GZ'),
587 }
587 }
588
588
589 # hgweb uses this list to communicate its preferred type
589 # hgweb uses this list to communicate its preferred type
590 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
590 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
591
591
592 class bundle20(object):
592 class bundle20(object):
593 """represent an outgoing bundle2 container
593 """represent an outgoing bundle2 container
594
594
595 Use the `addparam` method to add a stream level parameter and `newpart` to
595 Use the `addparam` method to add a stream level parameter and `newpart` to
596 populate it. Then call `getchunks` to retrieve all the binary chunks of
596 populate it. Then call `getchunks` to retrieve all the binary chunks of
597 data that compose the bundle2 container."""
597 data that compose the bundle2 container."""
598
598
599 _magicstring = 'HG20'
599 _magicstring = 'HG20'
600
600
601 def __init__(self, ui, capabilities=()):
601 def __init__(self, ui, capabilities=()):
602 self.ui = ui
602 self.ui = ui
603 self._params = []
603 self._params = []
604 self._parts = []
604 self._parts = []
605 self.capabilities = dict(capabilities)
605 self.capabilities = dict(capabilities)
606 self._compengine = util.compengines.forbundletype('UN')
606 self._compengine = util.compengines.forbundletype('UN')
607 self._compopts = None
607 self._compopts = None
608 # If compression is being handled by a consumer of the raw
608 # If compression is being handled by a consumer of the raw
609 # data (e.g. the wire protocol), unsetting this flag tells
609 # data (e.g. the wire protocol), unsetting this flag tells
610 # consumers that the bundle is best left uncompressed.
610 # consumers that the bundle is best left uncompressed.
611 self.prefercompressed = True
611 self.prefercompressed = True
612
612
613 def setcompression(self, alg, compopts=None):
613 def setcompression(self, alg, compopts=None):
614 """setup core part compression to <alg>"""
614 """setup core part compression to <alg>"""
615 if alg in (None, 'UN'):
615 if alg in (None, 'UN'):
616 return
616 return
617 assert not any(n.lower() == 'compression' for n, v in self._params)
617 assert not any(n.lower() == 'compression' for n, v in self._params)
618 self.addparam('Compression', alg)
618 self.addparam('Compression', alg)
619 self._compengine = util.compengines.forbundletype(alg)
619 self._compengine = util.compengines.forbundletype(alg)
620 self._compopts = compopts
620 self._compopts = compopts
621
621
622 @property
622 @property
623 def nbparts(self):
623 def nbparts(self):
624 """total number of parts added to the bundler"""
624 """total number of parts added to the bundler"""
625 return len(self._parts)
625 return len(self._parts)
626
626
627 # methods used to define the bundle2 content
627 # methods used to define the bundle2 content
628 def addparam(self, name, value=None):
628 def addparam(self, name, value=None):
629 """add a stream level parameter"""
629 """add a stream level parameter"""
630 if not name:
630 if not name:
631 raise ValueError(r'empty parameter name')
631 raise ValueError(r'empty parameter name')
632 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
632 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
633 raise ValueError(r'non letter first character: %s' % name)
633 raise ValueError(r'non letter first character: %s' % name)
634 self._params.append((name, value))
634 self._params.append((name, value))
635
635
636 def addpart(self, part):
636 def addpart(self, part):
637 """add a new part to the bundle2 container
637 """add a new part to the bundle2 container
638
638
639 Parts contain the actual application payload.
639 Parts contain the actual application payload.
640 assert part.id is None
640 assert part.id is None
641 part.id = len(self._parts) # very cheap counter
641 part.id = len(self._parts) # very cheap counter
642 self._parts.append(part)
642 self._parts.append(part)
643
643
644 def newpart(self, typeid, *args, **kwargs):
644 def newpart(self, typeid, *args, **kwargs):
645 """create a new part and add it to the containers
645 """create a new part and add it to the containers
646
646
647 The part is directly added to the container. For now, this means
647 The part is directly added to the container. For now, this means
648 that any failure to properly initialize the part after calling
648 that any failure to properly initialize the part after calling
649 ``newpart`` should result in a failure of the whole bundling process.
649 ``newpart`` should result in a failure of the whole bundling process.
650
650
651 You can still fall back to manually creating and adding a part if you
651 You can still fall back to manually creating and adding a part if you
652 need better control."""
652 need better control."""
653 part = bundlepart(typeid, *args, **kwargs)
653 part = bundlepart(typeid, *args, **kwargs)
654 self.addpart(part)
654 self.addpart(part)
655 return part
655 return part
656
656
657 # methods used to generate the bundle2 stream
657 # methods used to generate the bundle2 stream
658 def getchunks(self):
658 def getchunks(self):
659 if self.ui.debugflag:
659 if self.ui.debugflag:
660 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
660 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
661 if self._params:
661 if self._params:
662 msg.append(' (%i params)' % len(self._params))
662 msg.append(' (%i params)' % len(self._params))
663 msg.append(' %i parts total\n' % len(self._parts))
663 msg.append(' %i parts total\n' % len(self._parts))
664 self.ui.debug(''.join(msg))
664 self.ui.debug(''.join(msg))
665 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
665 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
666 yield self._magicstring
666 yield self._magicstring
667 param = self._paramchunk()
667 param = self._paramchunk()
668 outdebug(self.ui, 'bundle parameter: %s' % param)
668 outdebug(self.ui, 'bundle parameter: %s' % param)
669 yield _pack(_fstreamparamsize, len(param))
669 yield _pack(_fstreamparamsize, len(param))
670 if param:
670 if param:
671 yield param
671 yield param
672 for chunk in self._compengine.compressstream(self._getcorechunk(),
672 for chunk in self._compengine.compressstream(self._getcorechunk(),
673 self._compopts):
673 self._compopts):
674 yield chunk
674 yield chunk
675
675
676 def _paramchunk(self):
676 def _paramchunk(self):
677 """return a encoded version of all stream parameters"""
677 """return a encoded version of all stream parameters"""
678 blocks = []
678 blocks = []
679 for par, value in self._params:
679 for par, value in self._params:
680 par = urlreq.quote(par)
680 par = urlreq.quote(par)
681 if value is not None:
681 if value is not None:
682 value = urlreq.quote(value)
682 value = urlreq.quote(value)
683 par = '%s=%s' % (par, value)
683 par = '%s=%s' % (par, value)
684 blocks.append(par)
684 blocks.append(par)
685 return ' '.join(blocks)
685 return ' '.join(blocks)
686
686
687 def _getcorechunk(self):
687 def _getcorechunk(self):
688 """yield chunk for the core part of the bundle
688 """yield chunk for the core part of the bundle
689
689
690 (all but headers and parameters)"""
690 (all but headers and parameters)"""
691 outdebug(self.ui, 'start of parts')
691 outdebug(self.ui, 'start of parts')
692 for part in self._parts:
692 for part in self._parts:
693 outdebug(self.ui, 'bundle part: "%s"' % part.type)
693 outdebug(self.ui, 'bundle part: "%s"' % part.type)
694 for chunk in part.getchunks(ui=self.ui):
694 for chunk in part.getchunks(ui=self.ui):
695 yield chunk
695 yield chunk
696 outdebug(self.ui, 'end of bundle')
696 outdebug(self.ui, 'end of bundle')
697 yield _pack(_fpartheadersize, 0)
697 yield _pack(_fpartheadersize, 0)
698
698
699
699
700 def salvageoutput(self):
700 def salvageoutput(self):
701 """return a list with a copy of all output parts in the bundle
701 """return a list with a copy of all output parts in the bundle
702
702
703 This is meant to be used during error handling to make sure we preserve
703 This is meant to be used during error handling to make sure we preserve
704 server output"""
704 server output"""
705 salvaged = []
705 salvaged = []
706 for part in self._parts:
706 for part in self._parts:
707 if part.type.startswith('output'):
707 if part.type.startswith('output'):
708 salvaged.append(part.copy())
708 salvaged.append(part.copy())
709 return salvaged
709 return salvaged
710
710
711
711
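# Illustrative sketch (not part of the upstream module): assembling a tiny
# bundle2 container in memory. A ``ui`` object is assumed to be available,
# e.g. from ``repo.ui``.
def _examplemakebundle(ui):
    bundler = bundle20(ui)
    bundler.newpart('output', data='hello world\n', mandatory=False)
    return ''.join(bundler.getchunks())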
712 class unpackermixin(object):
712 class unpackermixin(object):
713 """A mixin to extract bytes and struct data from a stream"""
713 """A mixin to extract bytes and struct data from a stream"""
714
714
715 def __init__(self, fp):
715 def __init__(self, fp):
716 self._fp = fp
716 self._fp = fp
717
717
718 def _unpack(self, format):
718 def _unpack(self, format):
719 """unpack this struct format from the stream
719 """unpack this struct format from the stream
720
720
721 This method is meant for internal usage by the bundle2 protocol only.
721 This method is meant for internal usage by the bundle2 protocol only.
722 It directly manipulates the low level stream, including bundle2 level
722 It directly manipulates the low level stream, including bundle2 level
723 instructions.
723 instructions.
724
724
725 Do not use it to implement higher-level logic or methods."""
725 Do not use it to implement higher-level logic or methods."""
726 data = self._readexact(struct.calcsize(format))
726 data = self._readexact(struct.calcsize(format))
727 return _unpack(format, data)
727 return _unpack(format, data)
728
728
729 def _readexact(self, size):
729 def _readexact(self, size):
730 """read exactly <size> bytes from the stream
730 """read exactly <size> bytes from the stream
731
731
732 This method is meant for internal usage by the bundle2 protocol only.
732 This method is meant for internal usage by the bundle2 protocol only.
733 It directly manipulates the low level stream, including bundle2 level
733 It directly manipulates the low level stream, including bundle2 level
734 instructions.
734 instructions.
735
735
736 Do not use it to implement higher-level logic or methods."""
736 Do not use it to implement higher-level logic or methods."""
737 return changegroup.readexactly(self._fp, size)
737 return changegroup.readexactly(self._fp, size)
738
738
739 def getunbundler(ui, fp, magicstring=None):
739 def getunbundler(ui, fp, magicstring=None):
740 """return a valid unbundler object for a given magicstring"""
740 """return a valid unbundler object for a given magicstring"""
741 if magicstring is None:
741 if magicstring is None:
742 magicstring = changegroup.readexactly(fp, 4)
742 magicstring = changegroup.readexactly(fp, 4)
743 magic, version = magicstring[0:2], magicstring[2:4]
743 magic, version = magicstring[0:2], magicstring[2:4]
744 if magic != 'HG':
744 if magic != 'HG':
745 ui.debug(
745 ui.debug(
746 "error: invalid magic: %r (version %r), should be 'HG'\n"
746 "error: invalid magic: %r (version %r), should be 'HG'\n"
747 % (magic, version))
747 % (magic, version))
748 raise error.Abort(_('not a Mercurial bundle'))
748 raise error.Abort(_('not a Mercurial bundle'))
749 unbundlerclass = formatmap.get(version)
749 unbundlerclass = formatmap.get(version)
750 if unbundlerclass is None:
750 if unbundlerclass is None:
751 raise error.Abort(_('unknown bundle version %s') % version)
751 raise error.Abort(_('unknown bundle version %s') % version)
752 unbundler = unbundlerclass(ui, fp)
752 unbundler = unbundlerclass(ui, fp)
753 indebug(ui, 'start processing of %s stream' % magicstring)
753 indebug(ui, 'start processing of %s stream' % magicstring)
754 return unbundler
754 return unbundler
755
755
756 class unbundle20(unpackermixin):
756 class unbundle20(unpackermixin):
757 """interpret a bundle2 stream
757 """interpret a bundle2 stream
758
758
759 This class is fed with a binary stream and yields parts through its
759 This class is fed with a binary stream and yields parts through its
760 `iterparts` methods."""
760 `iterparts` methods."""
761
761
762 _magicstring = 'HG20'
762 _magicstring = 'HG20'
763
763
764 def __init__(self, ui, fp):
764 def __init__(self, ui, fp):
765 """If header is specified, we do not read it out of the stream."""
765 """If header is specified, we do not read it out of the stream."""
766 self.ui = ui
766 self.ui = ui
767 self._compengine = util.compengines.forbundletype('UN')
767 self._compengine = util.compengines.forbundletype('UN')
768 self._compressed = None
768 self._compressed = None
769 super(unbundle20, self).__init__(fp)
769 super(unbundle20, self).__init__(fp)
770
770
771 @util.propertycache
771 @util.propertycache
772 def params(self):
772 def params(self):
773 """dictionary of stream level parameters"""
773 """dictionary of stream level parameters"""
774 indebug(self.ui, 'reading bundle2 stream parameters')
774 indebug(self.ui, 'reading bundle2 stream parameters')
775 params = {}
775 params = {}
776 paramssize = self._unpack(_fstreamparamsize)[0]
776 paramssize = self._unpack(_fstreamparamsize)[0]
777 if paramssize < 0:
777 if paramssize < 0:
778 raise error.BundleValueError('negative bundle param size: %i'
778 raise error.BundleValueError('negative bundle param size: %i'
779 % paramssize)
779 % paramssize)
780 if paramssize:
780 if paramssize:
781 params = self._readexact(paramssize)
781 params = self._readexact(paramssize)
782 params = self._processallparams(params)
782 params = self._processallparams(params)
783 return params
783 return params
784
784
785 def _processallparams(self, paramsblock):
785 def _processallparams(self, paramsblock):
786 """"""
786 """"""
787 params = util.sortdict()
787 params = util.sortdict()
788 for p in paramsblock.split(' '):
788 for p in paramsblock.split(' '):
789 p = p.split('=', 1)
789 p = p.split('=', 1)
790 p = [urlreq.unquote(i) for i in p]
790 p = [urlreq.unquote(i) for i in p]
791 if len(p) < 2:
791 if len(p) < 2:
792 p.append(None)
792 p.append(None)
793 self._processparam(*p)
793 self._processparam(*p)
794 params[p[0]] = p[1]
794 params[p[0]] = p[1]
795 return params
795 return params
796
796
797
797
798 def _processparam(self, name, value):
798 def _processparam(self, name, value):
799 """process a parameter, applying its effect if needed
799 """process a parameter, applying its effect if needed
800
800
801 Parameters starting with a lower case letter are advisory and will be
801 Parameters starting with a lower case letter are advisory and will be
802 ignored when unknown. Those starting with an upper case letter are
802 ignored when unknown. Those starting with an upper case letter are
803 mandatory, and this function will raise a KeyError when they are unknown.
803 mandatory, and this function will raise a KeyError when they are unknown.
804
804
805 Note: no options are currently supported. Any input will either be
805 Note: no options are currently supported. Any input will either be
806 ignored or cause a failure.
806 ignored or cause a failure.
807 """
807 """
808 if not name:
808 if not name:
809 raise ValueError(r'empty parameter name')
809 raise ValueError(r'empty parameter name')
810 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
810 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
811 raise ValueError(r'non letter first character: %s' % name)
811 raise ValueError(r'non letter first character: %s' % name)
812 try:
812 try:
813 handler = b2streamparamsmap[name.lower()]
813 handler = b2streamparamsmap[name.lower()]
814 except KeyError:
814 except KeyError:
815 if name[0:1].islower():
815 if name[0:1].islower():
816 indebug(self.ui, "ignoring unknown parameter %s" % name)
816 indebug(self.ui, "ignoring unknown parameter %s" % name)
817 else:
817 else:
818 raise error.BundleUnknownFeatureError(params=(name,))
818 raise error.BundleUnknownFeatureError(params=(name,))
819 else:
819 else:
820 handler(self, name, value)
820 handler(self, name, value)
821
821
822 def _forwardchunks(self):
822 def _forwardchunks(self):
823 """utility to transfer a bundle2 as binary
823 """utility to transfer a bundle2 as binary
824
824
825 This is made necessary by the fact that the 'getbundle' command over 'ssh'
825 This is made necessary by the fact that the 'getbundle' command over 'ssh'
826 has no way to know when the reply ends, relying on the bundle to be
826 has no way to know when the reply ends, relying on the bundle to be
827 interpreted to know its end. This is terrible and we are sorry, but we
827 interpreted to know its end. This is terrible and we are sorry, but we
828 needed to move forward to get general delta enabled.
828 needed to move forward to get general delta enabled.
829 """
829 """
830 yield self._magicstring
830 yield self._magicstring
831 assert 'params' not in vars(self)
831 assert 'params' not in vars(self)
832 paramssize = self._unpack(_fstreamparamsize)[0]
832 paramssize = self._unpack(_fstreamparamsize)[0]
833 if paramssize < 0:
833 if paramssize < 0:
834 raise error.BundleValueError('negative bundle param size: %i'
834 raise error.BundleValueError('negative bundle param size: %i'
835 % paramssize)
835 % paramssize)
836 yield _pack(_fstreamparamsize, paramssize)
836 yield _pack(_fstreamparamsize, paramssize)
837 if paramssize:
837 if paramssize:
838 params = self._readexact(paramssize)
838 params = self._readexact(paramssize)
839 self._processallparams(params)
839 self._processallparams(params)
840 yield params
840 yield params
841 assert self._compengine.bundletype == 'UN'
841 assert self._compengine.bundletype == 'UN'
842 # From there, payload might need to be decompressed
842 # From there, payload might need to be decompressed
843 self._fp = self._compengine.decompressorreader(self._fp)
843 self._fp = self._compengine.decompressorreader(self._fp)
844 emptycount = 0
844 emptycount = 0
845 while emptycount < 2:
845 while emptycount < 2:
846 # so we can brainlessly loop
846 # so we can brainlessly loop
847 assert _fpartheadersize == _fpayloadsize
847 assert _fpartheadersize == _fpayloadsize
848 size = self._unpack(_fpartheadersize)[0]
848 size = self._unpack(_fpartheadersize)[0]
849 yield _pack(_fpartheadersize, size)
849 yield _pack(_fpartheadersize, size)
850 if size:
850 if size:
851 emptycount = 0
851 emptycount = 0
852 else:
852 else:
853 emptycount += 1
853 emptycount += 1
854 continue
854 continue
855 if size == flaginterrupt:
855 if size == flaginterrupt:
856 continue
856 continue
857 elif size < 0:
857 elif size < 0:
858 raise error.BundleValueError('negative chunk size: %i')
858 raise error.BundleValueError('negative chunk size: %i')
859 yield self._readexact(size)
859 yield self._readexact(size)
860
860
861
861
862 def iterparts(self, seekable=False):
862 def iterparts(self, seekable=False):
863 """yield all parts contained in the stream"""
863 """yield all parts contained in the stream"""
864 cls = seekableunbundlepart if seekable else unbundlepart
864 cls = seekableunbundlepart if seekable else unbundlepart
865 # make sure params have been loaded
865 # make sure params have been loaded
866 self.params
866 self.params
867 # From there, payload needs to be decompressed
867 # From there, payload needs to be decompressed
868 self._fp = self._compengine.decompressorreader(self._fp)
868 self._fp = self._compengine.decompressorreader(self._fp)
869 indebug(self.ui, 'start extraction of bundle2 parts')
869 indebug(self.ui, 'start extraction of bundle2 parts')
870 headerblock = self._readpartheader()
870 headerblock = self._readpartheader()
871 while headerblock is not None:
871 while headerblock is not None:
872 part = cls(self.ui, headerblock, self._fp)
872 part = cls(self.ui, headerblock, self._fp)
873 yield part
873 yield part
874 # Ensure part is fully consumed so we can start reading the next
874 # Ensure part is fully consumed so we can start reading the next
875 # part.
875 # part.
876 part.consume()
876 part.consume()
877
877
878 headerblock = self._readpartheader()
878 headerblock = self._readpartheader()
879 indebug(self.ui, 'end of bundle2 stream')
879 indebug(self.ui, 'end of bundle2 stream')
880
880
881 def _readpartheader(self):
881 def _readpartheader(self):
882 """reads a part header size and return the bytes blob
882 """reads a part header size and return the bytes blob
883
883
884 returns None if empty"""
884 returns None if empty"""
885 headersize = self._unpack(_fpartheadersize)[0]
885 headersize = self._unpack(_fpartheadersize)[0]
886 if headersize < 0:
886 if headersize < 0:
887 raise error.BundleValueError('negative part header size: %i'
887 raise error.BundleValueError('negative part header size: %i'
888 % headersize)
888 % headersize)
889 indebug(self.ui, 'part header size: %i' % headersize)
889 indebug(self.ui, 'part header size: %i' % headersize)
890 if headersize:
890 if headersize:
891 return self._readexact(headersize)
891 return self._readexact(headersize)
892 return None
892 return None
893
893
894 def compressed(self):
894 def compressed(self):
895 self.params # load params
895 self.params # load params
896 return self._compressed
896 return self._compressed
897
897
898 def close(self):
898 def close(self):
899 """close underlying file"""
899 """close underlying file"""
900 if util.safehasattr(self._fp, 'close'):
900 if util.safehasattr(self._fp, 'close'):
901 return self._fp.close()
901 return self._fp.close()
902
902
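# Illustrative sketch, not part of the original module: a minimal consumer of
# the unbundler class above. It assumes `unbundler` is an already constructed
# unbundle20 instance (typically obtained through the module's getunbundler
# helper) and `ui` is the associated ui object:
#
#   for part in unbundler.iterparts():
#       ui.debug('part %s, params %r\n' % (part.type, part.params))
#       data = part.read()
#       # iterparts() calls part.consume() before moving on to the next part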
903 formatmap = {'20': unbundle20}
903 formatmap = {'20': unbundle20}
904
904
905 b2streamparamsmap = {}
905 b2streamparamsmap = {}
906
906
907 def b2streamparamhandler(name):
907 def b2streamparamhandler(name):
908 """register a handler for a stream level parameter"""
908 """register a handler for a stream level parameter"""
909 def decorator(func):
909 def decorator(func):
910 assert name not in b2streamparamsmap
910 assert name not in b2streamparamsmap
911 b2streamparamsmap[name] = func
911 b2streamparamsmap[name] = func
912 return func
912 return func
913 return decorator
913 return decorator
914
914
915 @b2streamparamhandler('compression')
915 @b2streamparamhandler('compression')
916 def processcompression(unbundler, param, value):
916 def processcompression(unbundler, param, value):
917 """read compression parameter and install payload decompression"""
917 """read compression parameter and install payload decompression"""
918 if value not in util.compengines.supportedbundletypes:
918 if value not in util.compengines.supportedbundletypes:
919 raise error.BundleUnknownFeatureError(params=(param,),
919 raise error.BundleUnknownFeatureError(params=(param,),
920 values=(value,))
920 values=(value,))
921 unbundler._compengine = util.compengines.forbundletype(value)
921 unbundler._compengine = util.compengines.forbundletype(value)
922 if value is not None:
922 if value is not None:
923 unbundler._compressed = True
923 unbundler._compressed = True
924
924
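# Illustrative sketch, not part of the original module: registering a handler
# for a hypothetical advisory stream-level parameter, following the same
# pattern as the 'compression' handler above. The name 'origin' is made up
# for the example; a lower case name means the parameter is advisory and may
# be ignored by peers that do not know it:
#
#   @b2streamparamhandler('origin')
#   def processorigin(unbundler, param, value):
#       # purely informational, just remember where the bundle came from
#       unbundler._origin = value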
925 class bundlepart(object):
925 class bundlepart(object):
926 """A bundle2 part contains application level payload
926 """A bundle2 part contains application level payload
927
927
928 The part `type` is used to route the part to the application level
928 The part `type` is used to route the part to the application level
929 handler.
929 handler.
930
930
931 The part payload is contained in ``part.data``. It could be raw bytes or a
931 The part payload is contained in ``part.data``. It could be raw bytes or a
932 generator of byte chunks.
932 generator of byte chunks.
933
933
934 You can add parameters to the part using the ``addparam`` method.
934 You can add parameters to the part using the ``addparam`` method.
935 Parameters can be either mandatory (default) or advisory. Remote side
935 Parameters can be either mandatory (default) or advisory. Remote side
936 should be able to safely ignore the advisory ones.
936 should be able to safely ignore the advisory ones.
937
937
938 Neither the data nor the parameters can be modified after generation has begun.
938 Neither the data nor the parameters can be modified after generation has begun.
939 """
939 """
940
940
941 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
941 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
942 data='', mandatory=True):
942 data='', mandatory=True):
943 validateparttype(parttype)
943 validateparttype(parttype)
944 self.id = None
944 self.id = None
945 self.type = parttype
945 self.type = parttype
946 self._data = data
946 self._data = data
947 self._mandatoryparams = list(mandatoryparams)
947 self._mandatoryparams = list(mandatoryparams)
948 self._advisoryparams = list(advisoryparams)
948 self._advisoryparams = list(advisoryparams)
949 # checking for duplicated entries
949 # checking for duplicated entries
950 self._seenparams = set()
950 self._seenparams = set()
951 for pname, __ in self._mandatoryparams + self._advisoryparams:
951 for pname, __ in self._mandatoryparams + self._advisoryparams:
952 if pname in self._seenparams:
952 if pname in self._seenparams:
953 raise error.ProgrammingError('duplicated params: %s' % pname)
953 raise error.ProgrammingError('duplicated params: %s' % pname)
954 self._seenparams.add(pname)
954 self._seenparams.add(pname)
955 # status of the part's generation:
955 # status of the part's generation:
956 # - None: not started,
956 # - None: not started,
957 # - False: currently generated,
957 # - False: currently generated,
958 # - True: generation done.
958 # - True: generation done.
959 self._generated = None
959 self._generated = None
960 self.mandatory = mandatory
960 self.mandatory = mandatory
961
961
962 def __repr__(self):
962 def __repr__(self):
963 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
963 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
964 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
964 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
965 % (cls, id(self), self.id, self.type, self.mandatory))
965 % (cls, id(self), self.id, self.type, self.mandatory))
966
966
967 def copy(self):
967 def copy(self):
968 """return a copy of the part
968 """return a copy of the part
969
969
970 The new part has the very same content but no partid assigned yet.
970 The new part has the very same content but no partid assigned yet.
971 Parts with generated data cannot be copied."""
971 Parts with generated data cannot be copied."""
972 assert not util.safehasattr(self.data, 'next')
972 assert not util.safehasattr(self.data, 'next')
973 return self.__class__(self.type, self._mandatoryparams,
973 return self.__class__(self.type, self._mandatoryparams,
974 self._advisoryparams, self._data, self.mandatory)
974 self._advisoryparams, self._data, self.mandatory)
975
975
976 # methods used to define the part content
976 # methods used to define the part content
977 @property
977 @property
978 def data(self):
978 def data(self):
979 return self._data
979 return self._data
980
980
981 @data.setter
981 @data.setter
982 def data(self, data):
982 def data(self, data):
983 if self._generated is not None:
983 if self._generated is not None:
984 raise error.ReadOnlyPartError('part is being generated')
984 raise error.ReadOnlyPartError('part is being generated')
985 self._data = data
985 self._data = data
986
986
987 @property
987 @property
988 def mandatoryparams(self):
988 def mandatoryparams(self):
989 # make it an immutable tuple to force people through ``addparam``
989 # make it an immutable tuple to force people through ``addparam``
990 return tuple(self._mandatoryparams)
990 return tuple(self._mandatoryparams)
991
991
992 @property
992 @property
993 def advisoryparams(self):
993 def advisoryparams(self):
994 # make it an immutable tuple to force people through ``addparam``
994 # make it an immutable tuple to force people through ``addparam``
995 return tuple(self._advisoryparams)
995 return tuple(self._advisoryparams)
996
996
997 def addparam(self, name, value='', mandatory=True):
997 def addparam(self, name, value='', mandatory=True):
998 """add a parameter to the part
998 """add a parameter to the part
999
999
1000 If 'mandatory' is set to True, the remote handler must claim support
1000 If 'mandatory' is set to True, the remote handler must claim support
1001 for this parameter or the unbundling will be aborted.
1001 for this parameter or the unbundling will be aborted.
1002
1002
1003 The 'name' and 'value' cannot exceed 255 bytes each.
1003 The 'name' and 'value' cannot exceed 255 bytes each.
1004 """
1004 """
1005 if self._generated is not None:
1005 if self._generated is not None:
1006 raise error.ReadOnlyPartError('part is being generated')
1006 raise error.ReadOnlyPartError('part is being generated')
1007 if name in self._seenparams:
1007 if name in self._seenparams:
1008 raise ValueError('duplicated params: %s' % name)
1008 raise ValueError('duplicated params: %s' % name)
1009 self._seenparams.add(name)
1009 self._seenparams.add(name)
1010 params = self._advisoryparams
1010 params = self._advisoryparams
1011 if mandatory:
1011 if mandatory:
1012 params = self._mandatoryparams
1012 params = self._mandatoryparams
1013 params.append((name, value))
1013 params.append((name, value))
1014
1014
1015 # methods used to generate the bundle2 stream
1015 # methods used to generate the bundle2 stream
1016 def getchunks(self, ui):
1016 def getchunks(self, ui):
1017 if self._generated is not None:
1017 if self._generated is not None:
1018 raise error.ProgrammingError('part can only be consumed once')
1018 raise error.ProgrammingError('part can only be consumed once')
1019 self._generated = False
1019 self._generated = False
1020
1020
1021 if ui.debugflag:
1021 if ui.debugflag:
1022 msg = ['bundle2-output-part: "%s"' % self.type]
1022 msg = ['bundle2-output-part: "%s"' % self.type]
1023 if not self.mandatory:
1023 if not self.mandatory:
1024 msg.append(' (advisory)')
1024 msg.append(' (advisory)')
1025 nbmp = len(self.mandatoryparams)
1025 nbmp = len(self.mandatoryparams)
1026 nbap = len(self.advisoryparams)
1026 nbap = len(self.advisoryparams)
1027 if nbmp or nbap:
1027 if nbmp or nbap:
1028 msg.append(' (params:')
1028 msg.append(' (params:')
1029 if nbmp:
1029 if nbmp:
1030 msg.append(' %i mandatory' % nbmp)
1030 msg.append(' %i mandatory' % nbmp)
1031 if nbap:
1031 if nbap:
1032 msg.append(' %i advisory' % nbap)
1032 msg.append(' %i advisory' % nbap)
1033 msg.append(')')
1033 msg.append(')')
1034 if not self.data:
1034 if not self.data:
1035 msg.append(' empty payload')
1035 msg.append(' empty payload')
1036 elif (util.safehasattr(self.data, 'next')
1036 elif (util.safehasattr(self.data, 'next')
1037 or util.safehasattr(self.data, '__next__')):
1037 or util.safehasattr(self.data, '__next__')):
1038 msg.append(' streamed payload')
1038 msg.append(' streamed payload')
1039 else:
1039 else:
1040 msg.append(' %i bytes payload' % len(self.data))
1040 msg.append(' %i bytes payload' % len(self.data))
1041 msg.append('\n')
1041 msg.append('\n')
1042 ui.debug(''.join(msg))
1042 ui.debug(''.join(msg))
1043
1043
1044 #### header
1044 #### header
1045 if self.mandatory:
1045 if self.mandatory:
1046 parttype = self.type.upper()
1046 parttype = self.type.upper()
1047 else:
1047 else:
1048 parttype = self.type.lower()
1048 parttype = self.type.lower()
1049 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1049 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1050 ## parttype
1050 ## parttype
1051 header = [_pack(_fparttypesize, len(parttype)),
1051 header = [_pack(_fparttypesize, len(parttype)),
1052 parttype, _pack(_fpartid, self.id),
1052 parttype, _pack(_fpartid, self.id),
1053 ]
1053 ]
1054 ## parameters
1054 ## parameters
1055 # count
1055 # count
1056 manpar = self.mandatoryparams
1056 manpar = self.mandatoryparams
1057 advpar = self.advisoryparams
1057 advpar = self.advisoryparams
1058 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1058 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1059 # size
1059 # size
1060 parsizes = []
1060 parsizes = []
1061 for key, value in manpar:
1061 for key, value in manpar:
1062 parsizes.append(len(key))
1062 parsizes.append(len(key))
1063 parsizes.append(len(value))
1063 parsizes.append(len(value))
1064 for key, value in advpar:
1064 for key, value in advpar:
1065 parsizes.append(len(key))
1065 parsizes.append(len(key))
1066 parsizes.append(len(value))
1066 parsizes.append(len(value))
1067 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1067 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1068 header.append(paramsizes)
1068 header.append(paramsizes)
1069 # key, value
1069 # key, value
1070 for key, value in manpar:
1070 for key, value in manpar:
1071 header.append(key)
1071 header.append(key)
1072 header.append(value)
1072 header.append(value)
1073 for key, value in advpar:
1073 for key, value in advpar:
1074 header.append(key)
1074 header.append(key)
1075 header.append(value)
1075 header.append(value)
1076 ## finalize header
1076 ## finalize header
1077 try:
1077 try:
1078 headerchunk = ''.join(header)
1078 headerchunk = ''.join(header)
1079 except TypeError:
1079 except TypeError:
1080 raise TypeError(r'Found a non-bytes trying to '
1080 raise TypeError(r'Found a non-bytes trying to '
1081 r'build bundle part header: %r' % header)
1081 r'build bundle part header: %r' % header)
1082 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1082 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1083 yield _pack(_fpartheadersize, len(headerchunk))
1083 yield _pack(_fpartheadersize, len(headerchunk))
1084 yield headerchunk
1084 yield headerchunk
1085 ## payload
1085 ## payload
1086 try:
1086 try:
1087 for chunk in self._payloadchunks():
1087 for chunk in self._payloadchunks():
1088 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1088 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1089 yield _pack(_fpayloadsize, len(chunk))
1089 yield _pack(_fpayloadsize, len(chunk))
1090 yield chunk
1090 yield chunk
1091 except GeneratorExit:
1091 except GeneratorExit:
1092 # GeneratorExit means that nobody is listening for our
1092 # GeneratorExit means that nobody is listening for our
1093 # results anyway, so just bail quickly rather than trying
1093 # results anyway, so just bail quickly rather than trying
1094 # to produce an error part.
1094 # to produce an error part.
1095 ui.debug('bundle2-generatorexit\n')
1095 ui.debug('bundle2-generatorexit\n')
1096 raise
1096 raise
1097 except BaseException as exc:
1097 except BaseException as exc:
1098 bexc = stringutil.forcebytestr(exc)
1098 bexc = stringutil.forcebytestr(exc)
1099 # backup exception data for later
1099 # backup exception data for later
1100 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1100 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1101 % bexc)
1101 % bexc)
1102 tb = sys.exc_info()[2]
1102 tb = sys.exc_info()[2]
1103 msg = 'unexpected error: %s' % bexc
1103 msg = 'unexpected error: %s' % bexc
1104 interpart = bundlepart('error:abort', [('message', msg)],
1104 interpart = bundlepart('error:abort', [('message', msg)],
1105 mandatory=False)
1105 mandatory=False)
1106 interpart.id = 0
1106 interpart.id = 0
1107 yield _pack(_fpayloadsize, -1)
1107 yield _pack(_fpayloadsize, -1)
1108 for chunk in interpart.getchunks(ui=ui):
1108 for chunk in interpart.getchunks(ui=ui):
1109 yield chunk
1109 yield chunk
1110 outdebug(ui, 'closing payload chunk')
1110 outdebug(ui, 'closing payload chunk')
1111 # abort current part payload
1111 # abort current part payload
1112 yield _pack(_fpayloadsize, 0)
1112 yield _pack(_fpayloadsize, 0)
1113 pycompat.raisewithtb(exc, tb)
1113 pycompat.raisewithtb(exc, tb)
1114 # end of payload
1114 # end of payload
1115 outdebug(ui, 'closing payload chunk')
1115 outdebug(ui, 'closing payload chunk')
1116 yield _pack(_fpayloadsize, 0)
1116 yield _pack(_fpayloadsize, 0)
1117 self._generated = True
1117 self._generated = True
1118
1118
1119 def _payloadchunks(self):
1119 def _payloadchunks(self):
1120 """yield chunks of a the part payload
1120 """yield chunks of a the part payload
1121
1121
1122 Exists to handle the different methods to provide data to a part."""
1122 Exists to handle the different methods to provide data to a part."""
1123 # we only support fixed size data now.
1123 # we only support fixed size data now.
1124 # This will be improved in the future.
1124 # This will be improved in the future.
1125 if (util.safehasattr(self.data, 'next')
1125 if (util.safehasattr(self.data, 'next')
1126 or util.safehasattr(self.data, '__next__')):
1126 or util.safehasattr(self.data, '__next__')):
1127 buff = util.chunkbuffer(self.data)
1127 buff = util.chunkbuffer(self.data)
1128 chunk = buff.read(preferedchunksize)
1128 chunk = buff.read(preferedchunksize)
1129 while chunk:
1129 while chunk:
1130 yield chunk
1130 yield chunk
1131 chunk = buff.read(preferedchunksize)
1131 chunk = buff.read(preferedchunksize)
1132 elif len(self.data):
1132 elif len(self.data):
1133 yield self.data
1133 yield self.data
1134
1134
1135
1135
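# Illustrative sketch, not part of the original module: building a part by
# hand and serializing it with getchunks(). The part type, parameter and
# payload are made up for the example; real code normally creates parts
# through bundle20.newpart(), which also assigns the part id:
#
#   part = bundlepart('output', data='hello', mandatory=False)
#   part.addparam('verbosity', 'debug', mandatory=False)
#   part.id = 0                     # normally set by the containing bundler
#   blob = ''.join(part.getchunks(ui=ui))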
1136 flaginterrupt = -1
1136 flaginterrupt = -1
1137
1137
1138 class interrupthandler(unpackermixin):
1138 class interrupthandler(unpackermixin):
1139 """read one part and process it with restricted capability
1139 """read one part and process it with restricted capability
1140
1140
1141 This allows transmitting exceptions raised on the producer side during part
1141 This allows transmitting exceptions raised on the producer side during part
1142 iteration while the consumer is reading a part.
1142 iteration while the consumer is reading a part.
1143
1143
1144 Parts processed in this manner only have access to a ui object."""
1144 Parts processed in this manner only have access to a ui object."""
1145
1145
1146 def __init__(self, ui, fp):
1146 def __init__(self, ui, fp):
1147 super(interrupthandler, self).__init__(fp)
1147 super(interrupthandler, self).__init__(fp)
1148 self.ui = ui
1148 self.ui = ui
1149
1149
1150 def _readpartheader(self):
1150 def _readpartheader(self):
1151 """reads a part header size and return the bytes blob
1151 """reads a part header size and return the bytes blob
1152
1152
1153 returns None if empty"""
1153 returns None if empty"""
1154 headersize = self._unpack(_fpartheadersize)[0]
1154 headersize = self._unpack(_fpartheadersize)[0]
1155 if headersize < 0:
1155 if headersize < 0:
1156 raise error.BundleValueError('negative part header size: %i'
1156 raise error.BundleValueError('negative part header size: %i'
1157 % headersize)
1157 % headersize)
1158 indebug(self.ui, 'part header size: %i\n' % headersize)
1158 indebug(self.ui, 'part header size: %i\n' % headersize)
1159 if headersize:
1159 if headersize:
1160 return self._readexact(headersize)
1160 return self._readexact(headersize)
1161 return None
1161 return None
1162
1162
1163 def __call__(self):
1163 def __call__(self):
1164
1164
1165 self.ui.debug('bundle2-input-stream-interrupt:'
1165 self.ui.debug('bundle2-input-stream-interrupt:'
1166 ' opening out of band context\n')
1166 ' opening out of band context\n')
1167 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1167 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1168 headerblock = self._readpartheader()
1168 headerblock = self._readpartheader()
1169 if headerblock is None:
1169 if headerblock is None:
1170 indebug(self.ui, 'no part found during interruption.')
1170 indebug(self.ui, 'no part found during interruption.')
1171 return
1171 return
1172 part = unbundlepart(self.ui, headerblock, self._fp)
1172 part = unbundlepart(self.ui, headerblock, self._fp)
1173 op = interruptoperation(self.ui)
1173 op = interruptoperation(self.ui)
1174 hardabort = False
1174 hardabort = False
1175 try:
1175 try:
1176 _processpart(op, part)
1176 _processpart(op, part)
1177 except (SystemExit, KeyboardInterrupt):
1177 except (SystemExit, KeyboardInterrupt):
1178 hardabort = True
1178 hardabort = True
1179 raise
1179 raise
1180 finally:
1180 finally:
1181 if not hardabort:
1181 if not hardabort:
1182 part.consume()
1182 part.consume()
1183 self.ui.debug('bundle2-input-stream-interrupt:'
1183 self.ui.debug('bundle2-input-stream-interrupt:'
1184 ' closing out of band context\n')
1184 ' closing out of band context\n')
1185
1185
1186 class interruptoperation(object):
1186 class interruptoperation(object):
1187 """A limited operation to be use by part handler during interruption
1187 """A limited operation to be use by part handler during interruption
1188
1188
1189 It only have access to an ui object.
1189 It only have access to an ui object.
1190 """
1190 """
1191
1191
1192 def __init__(self, ui):
1192 def __init__(self, ui):
1193 self.ui = ui
1193 self.ui = ui
1194 self.reply = None
1194 self.reply = None
1195 self.captureoutput = False
1195 self.captureoutput = False
1196
1196
1197 @property
1197 @property
1198 def repo(self):
1198 def repo(self):
1199 raise error.ProgrammingError('no repo access from stream interruption')
1199 raise error.ProgrammingError('no repo access from stream interruption')
1200
1200
1201 def gettransaction(self):
1201 def gettransaction(self):
1202 raise TransactionUnavailable('no repo access from stream interruption')
1202 raise TransactionUnavailable('no repo access from stream interruption')
1203
1203
1204 def decodepayloadchunks(ui, fh):
1204 def decodepayloadchunks(ui, fh):
1205 """Reads bundle2 part payload data into chunks.
1205 """Reads bundle2 part payload data into chunks.
1206
1206
1207 Part payload data consists of framed chunks. This function takes
1207 Part payload data consists of framed chunks. This function takes
1208 a file handle and emits those chunks.
1208 a file handle and emits those chunks.
1209 """
1209 """
1210 dolog = ui.configbool('devel', 'bundle2.debug')
1210 dolog = ui.configbool('devel', 'bundle2.debug')
1211 debug = ui.debug
1211 debug = ui.debug
1212
1212
1213 headerstruct = struct.Struct(_fpayloadsize)
1213 headerstruct = struct.Struct(_fpayloadsize)
1214 headersize = headerstruct.size
1214 headersize = headerstruct.size
1215 unpack = headerstruct.unpack
1215 unpack = headerstruct.unpack
1216
1216
1217 readexactly = changegroup.readexactly
1217 readexactly = changegroup.readexactly
1218 read = fh.read
1218 read = fh.read
1219
1219
1220 chunksize = unpack(readexactly(fh, headersize))[0]
1220 chunksize = unpack(readexactly(fh, headersize))[0]
1221 indebug(ui, 'payload chunk size: %i' % chunksize)
1221 indebug(ui, 'payload chunk size: %i' % chunksize)
1222
1222
1223 # changegroup.readexactly() is inlined below for performance.
1223 # changegroup.readexactly() is inlined below for performance.
1224 while chunksize:
1224 while chunksize:
1225 if chunksize >= 0:
1225 if chunksize >= 0:
1226 s = read(chunksize)
1226 s = read(chunksize)
1227 if len(s) < chunksize:
1227 if len(s) < chunksize:
1228 raise error.Abort(_('stream ended unexpectedly '
1228 raise error.Abort(_('stream ended unexpectedly '
1229 ' (got %d bytes, expected %d)') %
1229 ' (got %d bytes, expected %d)') %
1230 (len(s), chunksize))
1230 (len(s), chunksize))
1231
1231
1232 yield s
1232 yield s
1233 elif chunksize == flaginterrupt:
1233 elif chunksize == flaginterrupt:
1234 # Interrupt "signal" detected. The regular stream is interrupted
1234 # Interrupt "signal" detected. The regular stream is interrupted
1235 # and a bundle2 part follows. Consume it.
1235 # and a bundle2 part follows. Consume it.
1236 interrupthandler(ui, fh)()
1236 interrupthandler(ui, fh)()
1237 else:
1237 else:
1238 raise error.BundleValueError(
1238 raise error.BundleValueError(
1239 'negative payload chunk size: %s' % chunksize)
1239 'negative payload chunk size: %s' % chunksize)
1240
1240
1241 s = read(headersize)
1241 s = read(headersize)
1242 if len(s) < headersize:
1242 if len(s) < headersize:
1243 raise error.Abort(_('stream ended unexpectedly '
1243 raise error.Abort(_('stream ended unexpectedly '
1244 ' (got %d bytes, expected %d)') %
1244 ' (got %d bytes, expected %d)') %
1245 (len(s), headersize))
1245 (len(s), headersize))
1246
1246
1247 chunksize = unpack(s)[0]
1247 chunksize = unpack(s)[0]
1248
1248
1249 # indebug() inlined for performance.
1249 # indebug() inlined for performance.
1250 if dolog:
1250 if dolog:
1251 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1251 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1252
1252
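# Illustrative sketch, not part of the original module: the framing that
# decodepayloadchunks() consumes, mirrored on the encoding side by
# bundlepart.getchunks() above. Every chunk is preceded by an int32 length;
# a length of 0 terminates the payload and flaginterrupt (-1) announces an
# out of band part:
#
#   def _encodepayload(chunks):
#       for chunk in chunks:
#           yield _pack(_fpayloadsize, len(chunk))
#           yield chunk
#       yield _pack(_fpayloadsize, 0)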
1253 class unbundlepart(unpackermixin):
1253 class unbundlepart(unpackermixin):
1254 """a bundle part read from a bundle"""
1254 """a bundle part read from a bundle"""
1255
1255
1256 def __init__(self, ui, header, fp):
1256 def __init__(self, ui, header, fp):
1257 super(unbundlepart, self).__init__(fp)
1257 super(unbundlepart, self).__init__(fp)
1258 self._seekable = (util.safehasattr(fp, 'seek') and
1258 self._seekable = (util.safehasattr(fp, 'seek') and
1259 util.safehasattr(fp, 'tell'))
1259 util.safehasattr(fp, 'tell'))
1260 self.ui = ui
1260 self.ui = ui
1261 # unbundle state attr
1261 # unbundle state attr
1262 self._headerdata = header
1262 self._headerdata = header
1263 self._headeroffset = 0
1263 self._headeroffset = 0
1264 self._initialized = False
1264 self._initialized = False
1265 self.consumed = False
1265 self.consumed = False
1266 # part data
1266 # part data
1267 self.id = None
1267 self.id = None
1268 self.type = None
1268 self.type = None
1269 self.mandatoryparams = None
1269 self.mandatoryparams = None
1270 self.advisoryparams = None
1270 self.advisoryparams = None
1271 self.params = None
1271 self.params = None
1272 self.mandatorykeys = ()
1272 self.mandatorykeys = ()
1273 self._readheader()
1273 self._readheader()
1274 self._mandatory = None
1274 self._mandatory = None
1275 self._pos = 0
1275 self._pos = 0
1276
1276
1277 def _fromheader(self, size):
1277 def _fromheader(self, size):
1278 """return the next <size> byte from the header"""
1278 """return the next <size> byte from the header"""
1279 offset = self._headeroffset
1279 offset = self._headeroffset
1280 data = self._headerdata[offset:(offset + size)]
1280 data = self._headerdata[offset:(offset + size)]
1281 self._headeroffset = offset + size
1281 self._headeroffset = offset + size
1282 return data
1282 return data
1283
1283
1284 def _unpackheader(self, format):
1284 def _unpackheader(self, format):
1285 """read given format from header
1285 """read given format from header
1286
1286
1287 This automatically computes the size of the format to read.
1287 This automatically computes the size of the format to read.
1288 data = self._fromheader(struct.calcsize(format))
1288 data = self._fromheader(struct.calcsize(format))
1289 return _unpack(format, data)
1289 return _unpack(format, data)
1290
1290
1291 def _initparams(self, mandatoryparams, advisoryparams):
1291 def _initparams(self, mandatoryparams, advisoryparams):
1292 """internal function to setup all logic related parameters"""
1292 """internal function to setup all logic related parameters"""
1293 # make it read only to prevent people touching it by mistake.
1293 # make it read only to prevent people touching it by mistake.
1294 self.mandatoryparams = tuple(mandatoryparams)
1294 self.mandatoryparams = tuple(mandatoryparams)
1295 self.advisoryparams = tuple(advisoryparams)
1295 self.advisoryparams = tuple(advisoryparams)
1296 # user friendly UI
1296 # user friendly UI
1297 self.params = util.sortdict(self.mandatoryparams)
1297 self.params = util.sortdict(self.mandatoryparams)
1298 self.params.update(self.advisoryparams)
1298 self.params.update(self.advisoryparams)
1299 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1299 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1300
1300
1301 def _readheader(self):
1301 def _readheader(self):
1302 """read the header and setup the object"""
1302 """read the header and setup the object"""
1303 typesize = self._unpackheader(_fparttypesize)[0]
1303 typesize = self._unpackheader(_fparttypesize)[0]
1304 self.type = self._fromheader(typesize)
1304 self.type = self._fromheader(typesize)
1305 indebug(self.ui, 'part type: "%s"' % self.type)
1305 indebug(self.ui, 'part type: "%s"' % self.type)
1306 self.id = self._unpackheader(_fpartid)[0]
1306 self.id = self._unpackheader(_fpartid)[0]
1307 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1307 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1308 # extract mandatory bit from type
1308 # extract mandatory bit from type
1309 self.mandatory = (self.type != self.type.lower())
1309 self.mandatory = (self.type != self.type.lower())
1310 self.type = self.type.lower()
1310 self.type = self.type.lower()
1311 ## reading parameters
1311 ## reading parameters
1312 # param count
1312 # param count
1313 mancount, advcount = self._unpackheader(_fpartparamcount)
1313 mancount, advcount = self._unpackheader(_fpartparamcount)
1314 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1314 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1315 # param size
1315 # param size
1316 fparamsizes = _makefpartparamsizes(mancount + advcount)
1316 fparamsizes = _makefpartparamsizes(mancount + advcount)
1317 paramsizes = self._unpackheader(fparamsizes)
1317 paramsizes = self._unpackheader(fparamsizes)
1318 # make it a list of couples again
1318 # make it a list of couples again
1319 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1319 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1320 # split mandatory from advisory
1320 # split mandatory from advisory
1321 mansizes = paramsizes[:mancount]
1321 mansizes = paramsizes[:mancount]
1322 advsizes = paramsizes[mancount:]
1322 advsizes = paramsizes[mancount:]
1323 # retrieve param value
1323 # retrieve param value
1324 manparams = []
1324 manparams = []
1325 for key, value in mansizes:
1325 for key, value in mansizes:
1326 manparams.append((self._fromheader(key), self._fromheader(value)))
1326 manparams.append((self._fromheader(key), self._fromheader(value)))
1327 advparams = []
1327 advparams = []
1328 for key, value in advsizes:
1328 for key, value in advsizes:
1329 advparams.append((self._fromheader(key), self._fromheader(value)))
1329 advparams.append((self._fromheader(key), self._fromheader(value)))
1330 self._initparams(manparams, advparams)
1330 self._initparams(manparams, advparams)
1331 ## part payload
1331 ## part payload
1332 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1332 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1333 # we read the data, tell it
1333 # we read the data, tell it
1334 self._initialized = True
1334 self._initialized = True
1335
1335
1336 def _payloadchunks(self):
1336 def _payloadchunks(self):
1337 """Generator of decoded chunks in the payload."""
1337 """Generator of decoded chunks in the payload."""
1338 return decodepayloadchunks(self.ui, self._fp)
1338 return decodepayloadchunks(self.ui, self._fp)
1339
1339
1340 def consume(self):
1340 def consume(self):
1341 """Read the part payload until completion.
1341 """Read the part payload until completion.
1342
1342
1343 By consuming the part data, the underlying stream read offset will
1343 By consuming the part data, the underlying stream read offset will
1344 be advanced to the next part (or end of stream).
1344 be advanced to the next part (or end of stream).
1345 """
1345 """
1346 if self.consumed:
1346 if self.consumed:
1347 return
1347 return
1348
1348
1349 chunk = self.read(32768)
1349 chunk = self.read(32768)
1350 while chunk:
1350 while chunk:
1351 self._pos += len(chunk)
1351 self._pos += len(chunk)
1352 chunk = self.read(32768)
1352 chunk = self.read(32768)
1353
1353
1354 def read(self, size=None):
1354 def read(self, size=None):
1355 """read payload data"""
1355 """read payload data"""
1356 if not self._initialized:
1356 if not self._initialized:
1357 self._readheader()
1357 self._readheader()
1358 if size is None:
1358 if size is None:
1359 data = self._payloadstream.read()
1359 data = self._payloadstream.read()
1360 else:
1360 else:
1361 data = self._payloadstream.read(size)
1361 data = self._payloadstream.read(size)
1362 self._pos += len(data)
1362 self._pos += len(data)
1363 if size is None or len(data) < size:
1363 if size is None or len(data) < size:
1364 if not self.consumed and self._pos:
1364 if not self.consumed and self._pos:
1365 self.ui.debug('bundle2-input-part: total payload size %i\n'
1365 self.ui.debug('bundle2-input-part: total payload size %i\n'
1366 % self._pos)
1366 % self._pos)
1367 self.consumed = True
1367 self.consumed = True
1368 return data
1368 return data
1369
1369
1370 class seekableunbundlepart(unbundlepart):
1370 class seekableunbundlepart(unbundlepart):
1371 """A bundle2 part in a bundle that is seekable.
1371 """A bundle2 part in a bundle that is seekable.
1372
1372
1373 Regular ``unbundlepart`` instances can only be read once. This class
1373 Regular ``unbundlepart`` instances can only be read once. This class
1374 extends ``unbundlepart`` to enable bi-directional seeking within the
1374 extends ``unbundlepart`` to enable bi-directional seeking within the
1375 part.
1375 part.
1376
1376
1377 Bundle2 part data consists of framed chunks. Offsets when seeking
1377 Bundle2 part data consists of framed chunks. Offsets when seeking
1378 refer to the decoded data, not the offsets in the underlying bundle2
1378 refer to the decoded data, not the offsets in the underlying bundle2
1379 stream.
1379 stream.
1380
1380
1381 To facilitate quickly seeking within the decoded data, instances of this
1381 To facilitate quickly seeking within the decoded data, instances of this
1382 class maintain a mapping between offsets in the underlying stream and
1382 class maintain a mapping between offsets in the underlying stream and
1383 the decoded payload. This mapping will consume memory in proportion
1383 the decoded payload. This mapping will consume memory in proportion
1384 to the number of chunks within the payload (which almost certainly
1384 to the number of chunks within the payload (which almost certainly
1385 increases in proportion with the size of the part).
1385 increases in proportion with the size of the part).
1386 """
1386 """
1387 def __init__(self, ui, header, fp):
1387 def __init__(self, ui, header, fp):
1388 # (payload, file) offsets for chunk starts.
1388 # (payload, file) offsets for chunk starts.
1389 self._chunkindex = []
1389 self._chunkindex = []
1390
1390
1391 super(seekableunbundlepart, self).__init__(ui, header, fp)
1391 super(seekableunbundlepart, self).__init__(ui, header, fp)
1392
1392
1393 def _payloadchunks(self, chunknum=0):
1393 def _payloadchunks(self, chunknum=0):
1394 '''seek to specified chunk and start yielding data'''
1394 '''seek to specified chunk and start yielding data'''
1395 if len(self._chunkindex) == 0:
1395 if len(self._chunkindex) == 0:
1396 assert chunknum == 0, 'Must start with chunk 0'
1396 assert chunknum == 0, 'Must start with chunk 0'
1397 self._chunkindex.append((0, self._tellfp()))
1397 self._chunkindex.append((0, self._tellfp()))
1398 else:
1398 else:
1399 assert chunknum < len(self._chunkindex), \
1399 assert chunknum < len(self._chunkindex), \
1400 'Unknown chunk %d' % chunknum
1400 'Unknown chunk %d' % chunknum
1401 self._seekfp(self._chunkindex[chunknum][1])
1401 self._seekfp(self._chunkindex[chunknum][1])
1402
1402
1403 pos = self._chunkindex[chunknum][0]
1403 pos = self._chunkindex[chunknum][0]
1404
1404
1405 for chunk in decodepayloadchunks(self.ui, self._fp):
1405 for chunk in decodepayloadchunks(self.ui, self._fp):
1406 chunknum += 1
1406 chunknum += 1
1407 pos += len(chunk)
1407 pos += len(chunk)
1408 if chunknum == len(self._chunkindex):
1408 if chunknum == len(self._chunkindex):
1409 self._chunkindex.append((pos, self._tellfp()))
1409 self._chunkindex.append((pos, self._tellfp()))
1410
1410
1411 yield chunk
1411 yield chunk
1412
1412
1413 def _findchunk(self, pos):
1413 def _findchunk(self, pos):
1414 '''for a given payload position, return a chunk number and offset'''
1414 '''for a given payload position, return a chunk number and offset'''
1415 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1415 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1416 if ppos == pos:
1416 if ppos == pos:
1417 return chunk, 0
1417 return chunk, 0
1418 elif ppos > pos:
1418 elif ppos > pos:
1419 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1419 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1420 raise ValueError('Unknown chunk')
1420 raise ValueError('Unknown chunk')
1421
1421
1422 def tell(self):
1422 def tell(self):
1423 return self._pos
1423 return self._pos
1424
1424
1425 def seek(self, offset, whence=os.SEEK_SET):
1425 def seek(self, offset, whence=os.SEEK_SET):
1426 if whence == os.SEEK_SET:
1426 if whence == os.SEEK_SET:
1427 newpos = offset
1427 newpos = offset
1428 elif whence == os.SEEK_CUR:
1428 elif whence == os.SEEK_CUR:
1429 newpos = self._pos + offset
1429 newpos = self._pos + offset
1430 elif whence == os.SEEK_END:
1430 elif whence == os.SEEK_END:
1431 if not self.consumed:
1431 if not self.consumed:
1432 # Can't use self.consume() here because it advances self._pos.
1432 # Can't use self.consume() here because it advances self._pos.
1433 chunk = self.read(32768)
1433 chunk = self.read(32768)
1434 while chunk:
1434 while chunk:
1435 chunk = self.read(32768)
1435 chunk = self.read(32768)
1436 newpos = self._chunkindex[-1][0] - offset
1436 newpos = self._chunkindex[-1][0] - offset
1437 else:
1437 else:
1438 raise ValueError('Unknown whence value: %r' % (whence,))
1438 raise ValueError('Unknown whence value: %r' % (whence,))
1439
1439
1440 if newpos > self._chunkindex[-1][0] and not self.consumed:
1440 if newpos > self._chunkindex[-1][0] and not self.consumed:
1441 # Can't use self.consume() here because it advances self._pos.
1441 # Can't use self.consume() here because it advances self._pos.
1442 chunk = self.read(32768)
1442 chunk = self.read(32768)
1443 while chunk:
1443 while chunk:
1444 chunk = self.read(32768)
1444 chunk = self.read(32768)
1445
1445
1446 if not 0 <= newpos <= self._chunkindex[-1][0]:
1446 if not 0 <= newpos <= self._chunkindex[-1][0]:
1447 raise ValueError('Offset out of range')
1447 raise ValueError('Offset out of range')
1448
1448
1449 if self._pos != newpos:
1449 if self._pos != newpos:
1450 chunk, internaloffset = self._findchunk(newpos)
1450 chunk, internaloffset = self._findchunk(newpos)
1451 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1451 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1452 adjust = self.read(internaloffset)
1452 adjust = self.read(internaloffset)
1453 if len(adjust) != internaloffset:
1453 if len(adjust) != internaloffset:
1454 raise error.Abort(_('Seek failed\n'))
1454 raise error.Abort(_('Seek failed\n'))
1455 self._pos = newpos
1455 self._pos = newpos
1456
1456
1457 def _seekfp(self, offset, whence=0):
1457 def _seekfp(self, offset, whence=0):
1458 """move the underlying file pointer
1458 """move the underlying file pointer
1459
1459
1460 This method is meant for internal usage by the bundle2 protocol only.
1460 This method is meant for internal usage by the bundle2 protocol only.
1461 It directly manipulates the low level stream, including bundle2 level
1461 It directly manipulates the low level stream, including bundle2 level
1462 instructions.
1462 instructions.
1463
1463
1464 Do not use it to implement higher-level logic or methods."""
1464 Do not use it to implement higher-level logic or methods."""
1465 if self._seekable:
1465 if self._seekable:
1466 return self._fp.seek(offset, whence)
1466 return self._fp.seek(offset, whence)
1467 else:
1467 else:
1468 raise NotImplementedError(_('File pointer is not seekable'))
1468 raise NotImplementedError(_('File pointer is not seekable'))
1469
1469
1470 def _tellfp(self):
1470 def _tellfp(self):
1471 """return the file offset, or None if file is not seekable
1471 """return the file offset, or None if file is not seekable
1472
1472
1473 This method is meant for internal usage by the bundle2 protocol only.
1473 This method is meant for internal usage by the bundle2 protocol only.
1474 It directly manipulates the low level stream, including bundle2 level
1474 It directly manipulates the low level stream, including bundle2 level
1475 instructions.
1475 instructions.
1476
1476
1477 Do not use it to implement higher-level logic or methods."""
1477 Do not use it to implement higher-level logic or methods."""
1478 if self._seekable:
1478 if self._seekable:
1479 try:
1479 try:
1480 return self._fp.tell()
1480 return self._fp.tell()
1481 except IOError as e:
1481 except IOError as e:
1482 if e.errno == errno.ESPIPE:
1482 if e.errno == errno.ESPIPE:
1483 self._seekable = False
1483 self._seekable = False
1484 else:
1484 else:
1485 raise
1485 raise
1486 return None
1486 return None
1487
1487
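# Illustrative sketch, not part of the original module: random access within
# a part obtained from iterparts(seekable=True). Offsets passed to seek()
# are positions in the decoded payload, not in the underlying bundle2 stream:
#
#   for part in unbundler.iterparts(seekable=True):
#       magic = part.read(4)
#       part.seek(0)            # rewind to the beginning of the payload
#       everything = part.read()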
1488 # These are only the static capabilities.
1488 # These are only the static capabilities.
1489 # Check the 'getrepocaps' function for the rest.
1489 # Check the 'getrepocaps' function for the rest.
1490 capabilities = {'HG20': (),
1490 capabilities = {'HG20': (),
1491 'bookmarks': (),
1491 'bookmarks': (),
1492 'error': ('abort', 'unsupportedcontent', 'pushraced',
1492 'error': ('abort', 'unsupportedcontent', 'pushraced',
1493 'pushkey'),
1493 'pushkey'),
1494 'listkeys': (),
1494 'listkeys': (),
1495 'pushkey': (),
1495 'pushkey': (),
1496 'digests': tuple(sorted(util.DIGESTS.keys())),
1496 'digests': tuple(sorted(util.DIGESTS.keys())),
1497 'remote-changegroup': ('http', 'https'),
1497 'remote-changegroup': ('http', 'https'),
1498 'hgtagsfnodes': (),
1498 'hgtagsfnodes': (),
1499 'rev-branch-cache': (),
1499 'rev-branch-cache': (),
1500 'phases': ('heads',),
1500 'phases': ('heads',),
1501 'stream': ('v2',),
1501 'stream': ('v2',),
1502 }
1502 }
1503
1503
1504 def getrepocaps(repo, allowpushback=False, role=None):
1504 def getrepocaps(repo, allowpushback=False, role=None):
1505 """return the bundle2 capabilities for a given repo
1505 """return the bundle2 capabilities for a given repo
1506
1506
1507 Exists to allow extensions (like evolution) to mutate the capabilities.
1507 Exists to allow extensions (like evolution) to mutate the capabilities.
1508
1508
1509 The returned value is used for servers advertising their capabilities as
1509 The returned value is used for servers advertising their capabilities as
1510 well as clients advertising their capabilities to servers as part of
1510 well as clients advertising their capabilities to servers as part of
1511 bundle2 requests. The ``role`` argument specifies which is which.
1511 bundle2 requests. The ``role`` argument specifies which is which.
1512 """
1512 """
1513 if role not in ('client', 'server'):
1513 if role not in ('client', 'server'):
1514 raise error.ProgrammingError('role argument must be client or server')
1514 raise error.ProgrammingError('role argument must be client or server')
1515
1515
1516 caps = capabilities.copy()
1516 caps = capabilities.copy()
1517 caps['changegroup'] = tuple(sorted(
1517 caps['changegroup'] = tuple(sorted(
1518 changegroup.supportedincomingversions(repo)))
1518 changegroup.supportedincomingversions(repo)))
1519 if obsolete.isenabled(repo, obsolete.exchangeopt):
1519 if obsolete.isenabled(repo, obsolete.exchangeopt):
1520 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1520 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1521 caps['obsmarkers'] = supportedformat
1521 caps['obsmarkers'] = supportedformat
1522 if allowpushback:
1522 if allowpushback:
1523 caps['pushback'] = ()
1523 caps['pushback'] = ()
1524 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1524 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1525 if cpmode == 'check-related':
1525 if cpmode == 'check-related':
1526 caps['checkheads'] = ('related',)
1526 caps['checkheads'] = ('related',)
1527 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1527 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1528 caps.pop('phases')
1528 caps.pop('phases')
1529
1529
1530 # Don't advertise stream clone support in server mode if not configured.
1530 # Don't advertise stream clone support in server mode if not configured.
1531 if role == 'server':
1531 if role == 'server':
1532 streamsupported = repo.ui.configbool('server', 'uncompressed',
1532 streamsupported = repo.ui.configbool('server', 'uncompressed',
1533 untrusted=True)
1533 untrusted=True)
1534 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1534 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1535
1535
1536 if not streamsupported or not featuresupported:
1536 if not streamsupported or not featuresupported:
1537 caps.pop('stream')
1537 caps.pop('stream')
1538 # Else always advertise support on client, because payload support
1538 # Else always advertise support on client, because payload support
1539 # should always be advertised.
1539 # should always be advertised.
1540
1540
1541 return caps
1541 return caps
1542
1542
1543 def bundle2caps(remote):
1543 def bundle2caps(remote):
1544 """return the bundle capabilities of a peer as dict"""
1544 """return the bundle capabilities of a peer as dict"""
1545 raw = remote.capable('bundle2')
1545 raw = remote.capable('bundle2')
1546 if not raw and raw != '':
1546 if not raw and raw != '':
1547 return {}
1547 return {}
1548 capsblob = urlreq.unquote(remote.capable('bundle2'))
1548 capsblob = urlreq.unquote(remote.capable('bundle2'))
1549 return decodecaps(capsblob)
1549 return decodecaps(capsblob)
1550
1550
1551 def obsmarkersversion(caps):
1551 def obsmarkersversion(caps):
1552 """extract the list of supported obsmarkers versions from a bundle2caps dict
1552 """extract the list of supported obsmarkers versions from a bundle2caps dict
1553 """
1553 """
1554 obscaps = caps.get('obsmarkers', ())
1554 obscaps = caps.get('obsmarkers', ())
1555 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1555 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1556
1556
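# Illustrative sketch, not part of the original module: obsmarkersversion()
# simply turns the advertised 'obsmarkers' capability into integer versions,
# e.g.:
#
#   >>> obsmarkersversion({'obsmarkers': ('V0', 'V1')})
#   [0, 1]
#   >>> obsmarkersversion({})
#   []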
1557 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1557 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1558 vfs=None, compression=None, compopts=None):
1558 vfs=None, compression=None, compopts=None):
1559 if bundletype.startswith('HG10'):
1559 if bundletype.startswith('HG10'):
1560 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1560 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1561 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1561 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1562 compression=compression, compopts=compopts)
1562 compression=compression, compopts=compopts)
1563 elif not bundletype.startswith('HG20'):
1563 elif not bundletype.startswith('HG20'):
1564 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1564 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1565
1565
1566 caps = {}
1566 caps = {}
1567 if 'obsolescence' in opts:
1567 if 'obsolescence' in opts:
1568 caps['obsmarkers'] = ('V1',)
1568 caps['obsmarkers'] = ('V1',)
1569 bundle = bundle20(ui, caps)
1569 bundle = bundle20(ui, caps)
1570 bundle.setcompression(compression, compopts)
1570 bundle.setcompression(compression, compopts)
1571 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1571 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1572 chunkiter = bundle.getchunks()
1572 chunkiter = bundle.getchunks()
1573
1573
1574 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1574 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1575
1575
1576 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1576 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1577 # We should eventually reconcile this logic with the one behind
1577 # We should eventually reconcile this logic with the one behind
1578 # 'exchange.getbundle2partsgenerator'.
1578 # 'exchange.getbundle2partsgenerator'.
1579 #
1579 #
1580 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1580 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1581 # different right now. So we keep them separated for now for the sake of
1581 # different right now. So we keep them separated for now for the sake of
1582 # simplicity.
1582 # simplicity.
1583
1583
1584 # we might not always want a changegroup in such a bundle, for example in
1584 # we might not always want a changegroup in such a bundle, for example in
1585 # stream bundles
1585 # stream bundles
1586 if opts.get('changegroup', True):
1586 if opts.get('changegroup', True):
1587 cgversion = opts.get('cg.version')
1587 cgversion = opts.get('cg.version')
1588 if cgversion is None:
1588 if cgversion is None:
1589 cgversion = changegroup.safeversion(repo)
1589 cgversion = changegroup.safeversion(repo)
1590 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1590 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1591 part = bundler.newpart('changegroup', data=cg.getchunks())
1591 part = bundler.newpart('changegroup', data=cg.getchunks())
1592 part.addparam('version', cg.version)
1592 part.addparam('version', cg.version)
1593 if 'clcount' in cg.extras:
1593 if 'clcount' in cg.extras:
1594 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1594 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1595 mandatory=False)
1595 mandatory=False)
1596 if opts.get('phases') and repo.revs('%ln and secret()',
1596 if opts.get('phases') and repo.revs('%ln and secret()',
1597 outgoing.missingheads):
1597 outgoing.missingheads):
1598 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1598 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1599
1599
1600 if opts.get('streamv2', False):
1600 if opts.get('streamv2', False):
1601 addpartbundlestream2(bundler, repo, stream=True)
1601 addpartbundlestream2(bundler, repo, stream=True)
1602
1602
1603 if opts.get('tagsfnodescache', True):
1603 if opts.get('tagsfnodescache', True):
1604 addparttagsfnodescache(repo, bundler, outgoing)
1604 addparttagsfnodescache(repo, bundler, outgoing)
1605
1605
1606 if opts.get('revbranchcache', True):
1606 if opts.get('revbranchcache', True):
1607 addpartrevbranchcache(repo, bundler, outgoing)
1607 addpartrevbranchcache(repo, bundler, outgoing)
1608
1608
1609 if opts.get('obsolescence', False):
1609 if opts.get('obsolescence', False):
1610 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1610 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1611 buildobsmarkerspart(bundler, obsmarkers)
1611 buildobsmarkerspart(bundler, obsmarkers)
1612
1612
1613 if opts.get('phases', False):
1613 if opts.get('phases', False):
1614 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1614 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1615 phasedata = phases.binaryencode(headsbyphase)
1615 phasedata = phases.binaryencode(headsbyphase)
1616 bundler.newpart('phase-heads', data=phasedata)
1616 bundler.newpart('phase-heads', data=phasedata)
1617
1617
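# Illustrative sketch, not part of the original module: the opts dictionary
# understood by _addpartsfromopts() above. All keys are optional; the values
# shown are the defaults used by the function:
#
#   opts = {
#       'changegroup': True,       # emit a 'changegroup' part
#       'cg.version': None,        # None -> changegroup.safeversion(repo)
#       'phases': False,           # emit a 'phase-heads' part
#       'obsolescence': False,     # emit an 'obsmarkers' part
#       'streamv2': False,         # emit a 'stream2' part
#       'tagsfnodescache': True,   # emit a 'hgtagsfnodes' part
#       'revbranchcache': True,    # emit a 'cache:rev-branch-cache' part
#   }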
1618 def addparttagsfnodescache(repo, bundler, outgoing):
1618 def addparttagsfnodescache(repo, bundler, outgoing):
1619 # we include the tags fnode cache for the bundle changeset
1619 # we include the tags fnode cache for the bundle changeset
1620 # (as an optional part)
1620 # (as an optional part)
1621 cache = tags.hgtagsfnodescache(repo.unfiltered())
1621 cache = tags.hgtagsfnodescache(repo.unfiltered())
1622 chunks = []
1622 chunks = []
1623
1623
1624 # .hgtags fnodes are only relevant for head changesets. While we could
1624 # .hgtags fnodes are only relevant for head changesets. While we could
1625 # transfer values for all known nodes, there will likely be little to
1625 # transfer values for all known nodes, there will likely be little to
1626 # no benefit.
1626 # no benefit.
1627 #
1627 #
1628 # We don't bother using a generator to produce output data because
1628 # We don't bother using a generator to produce output data because
1629 # a) we only have 40 bytes per head and even esoteric numbers of heads
1629 # a) we only have 40 bytes per head and even esoteric numbers of heads
1630 # consume little memory (1M heads is 40MB) b) we don't want to send the
1630 # consume little memory (1M heads is 40MB) b) we don't want to send the
1631 # part if we don't have entries and knowing if we have entries requires
1631 # part if we don't have entries and knowing if we have entries requires
1632 # cache lookups.
1632 # cache lookups.
1633 for node in outgoing.missingheads:
1633 for node in outgoing.missingheads:
1634 # Don't compute missing, as this may slow down serving.
1634 # Don't compute missing, as this may slow down serving.
1635 fnode = cache.getfnode(node, computemissing=False)
1635 fnode = cache.getfnode(node, computemissing=False)
1636 if fnode is not None:
1636 if fnode is not None:
1637 chunks.extend([node, fnode])
1637 chunks.extend([node, fnode])
1638
1638
1639 if chunks:
1639 if chunks:
1640 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1640 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1641
1641
1642 def addpartrevbranchcache(repo, bundler, outgoing):
1642 def addpartrevbranchcache(repo, bundler, outgoing):
1643 # we include the rev branch cache for the bundle changeset
1643 # we include the rev branch cache for the bundle changeset
1644 # (as an optional part)
1644 # (as an optional part)
1645 cache = repo.revbranchcache()
1645 cache = repo.revbranchcache()
1646 cl = repo.unfiltered().changelog
1646 cl = repo.unfiltered().changelog
1647 branchesdata = collections.defaultdict(lambda: (set(), set()))
1647 branchesdata = collections.defaultdict(lambda: (set(), set()))
1648 for node in outgoing.missing:
1648 for node in outgoing.missing:
1649 branch, close = cache.branchinfo(cl.rev(node))
1649 branch, close = cache.branchinfo(cl.rev(node))
1650 branchesdata[branch][close].add(node)
1650 branchesdata[branch][close].add(node)
1651
1651
1652 def generate():
1652 def generate():
1653 for branch, (nodes, closed) in sorted(branchesdata.items()):
1653 for branch, (nodes, closed) in sorted(branchesdata.items()):
1654 utf8branch = encoding.fromlocal(branch)
1654 utf8branch = encoding.fromlocal(branch)
1655 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1655 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1656 yield utf8branch
1656 yield utf8branch
1657 for n in sorted(nodes):
1657 for n in sorted(nodes):
1658 yield n
1658 yield n
1659 for n in sorted(closed):
1659 for n in sorted(closed):
1660 yield n
1660 yield n
1661
1661
1662 bundler.newpart('cache:rev-branch-cache', data=generate())
1662 bundler.newpart('cache:rev-branch-cache', data=generate())
1663
1663
1664 def _formatrequirementsspec(requirements):
1664 def _formatrequirementsspec(requirements):
1665 return urlreq.quote(','.join(sorted(requirements)))
1665 return urlreq.quote(','.join(sorted(requirements)))
1666
1666
1667 def _formatrequirementsparams(requirements):
1667 def _formatrequirementsparams(requirements):
1668 requirements = _formatrequirementsspec(requirements)
1668 requirements = _formatrequirementsspec(requirements)
1669 params = "%s%s" % (urlreq.quote("requirements="), requirements)
1669 params = "%s%s" % (urlreq.quote("requirements="), requirements)
1670 return params
1670 return params
1671
1671
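# Illustrative sketch, not part of the original module: assuming urlreq.quote
# behaves like urllib's quote (',' and '=' are percent-encoded by default),
# the two helpers above produce output along these lines:
#
#   >>> _formatrequirementsspec({'store', 'revlogv1'})
#   'revlogv1%2Cstore'
#   >>> _formatrequirementsparams({'store', 'revlogv1'})
#   'requirements%3Drevlogv1%2Cstore'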
1672 def addpartbundlestream2(bundler, repo, **kwargs):
1672 def addpartbundlestream2(bundler, repo, **kwargs):
1673 if not kwargs.get('stream', False):
1673 if not kwargs.get('stream', False):
1674 return
1674 return
1675
1675
1676 if not streamclone.allowservergeneration(repo):
1676 if not streamclone.allowservergeneration(repo):
1677 raise error.Abort(_('stream data requested but server does not allow '
1677 raise error.Abort(_('stream data requested but server does not allow '
1678 'this feature'),
1678 'this feature'),
1679 hint=_('well-behaved clients should not be '
1679 hint=_('well-behaved clients should not be '
1680 'requesting stream data from servers not '
1680 'requesting stream data from servers not '
1681 'advertising it; the client may be buggy'))
1681 'advertising it; the client may be buggy'))
1682
1682
1683 # Stream clones don't compress well. And compression undermines a
1683 # Stream clones don't compress well. And compression undermines a
1684 # goal of stream clones, which is to be fast. Communicate the desire
1684 # goal of stream clones, which is to be fast. Communicate the desire
1685 # to avoid compression to consumers of the bundle.
1685 # to avoid compression to consumers of the bundle.
1686 bundler.prefercompressed = False
1686 bundler.prefercompressed = False
1687
1687
1688 filecount, bytecount, it = streamclone.generatev2(repo)
1688 filecount, bytecount, it = streamclone.generatev2(repo)
1689 requirements = _formatrequirementsspec(repo.requirements)
1689 requirements = _formatrequirementsspec(repo.requirements)
1690 part = bundler.newpart('stream2', data=it)
1690 part = bundler.newpart('stream2', data=it)
1691 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1691 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1692 part.addparam('filecount', '%d' % filecount, mandatory=True)
1692 part.addparam('filecount', '%d' % filecount, mandatory=True)
1693 part.addparam('requirements', requirements, mandatory=True)
1693 part.addparam('requirements', requirements, mandatory=True)
1694
1694
1695 def buildobsmarkerspart(bundler, markers):
1695 def buildobsmarkerspart(bundler, markers):
1696 """add an obsmarker part to the bundler with <markers>
1696 """add an obsmarker part to the bundler with <markers>
1697
1697
1698 No part is created if markers is empty.
1698 No part is created if markers is empty.
1699 Raises ValueError if the bundler doesn't support any known obsmarker format.
1699 Raises ValueError if the bundler doesn't support any known obsmarker format.
1700 """
1700 """
1701 if not markers:
1701 if not markers:
1702 return None
1702 return None
1703
1703
1704 remoteversions = obsmarkersversion(bundler.capabilities)
1704 remoteversions = obsmarkersversion(bundler.capabilities)
1705 version = obsolete.commonversion(remoteversions)
1705 version = obsolete.commonversion(remoteversions)
1706 if version is None:
1706 if version is None:
1707 raise ValueError('bundler does not support common obsmarker format')
1707 raise ValueError('bundler does not support common obsmarker format')
1708 stream = obsolete.encodemarkers(markers, True, version=version)
1708 stream = obsolete.encodemarkers(markers, True, version=version)
1709 return bundler.newpart('obsmarkers', data=stream)
1709 return bundler.newpart('obsmarkers', data=stream)
1710
1710
1711 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1711 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1712 compopts=None):
1712 compopts=None):
1713 """Write a bundle file and return its filename.
1713 """Write a bundle file and return its filename.
1714
1714
1715 Existing files will not be overwritten.
1715 Existing files will not be overwritten.
1716 If no filename is specified, a temporary file is created.
1716 If no filename is specified, a temporary file is created.
1717 bz2 compression can be turned off.
1717 bz2 compression can be turned off.
1718 The bundle file will be deleted in case of errors.
1718 The bundle file will be deleted in case of errors.
1719 """
1719 """
1720
1720
1721 if bundletype == "HG20":
1721 if bundletype == "HG20":
1722 bundle = bundle20(ui)
1722 bundle = bundle20(ui)
1723 bundle.setcompression(compression, compopts)
1723 bundle.setcompression(compression, compopts)
1724 part = bundle.newpart('changegroup', data=cg.getchunks())
1724 part = bundle.newpart('changegroup', data=cg.getchunks())
1725 part.addparam('version', cg.version)
1725 part.addparam('version', cg.version)
1726 if 'clcount' in cg.extras:
1726 if 'clcount' in cg.extras:
1727 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1727 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1728 mandatory=False)
1728 mandatory=False)
1729 chunkiter = bundle.getchunks()
1729 chunkiter = bundle.getchunks()
1730 else:
1730 else:
1731 # compression argument is only for the bundle2 case
1731 # compression argument is only for the bundle2 case
1732 assert compression is None
1732 assert compression is None
1733 if cg.version != '01':
1733 if cg.version != '01':
1734 raise error.Abort(_('old bundle types only support v1 '
1734 raise error.Abort(_('old bundle types only support v1 '
1735 'changegroups'))
1735 'changegroups'))
1736 header, comp = bundletypes[bundletype]
1736 header, comp = bundletypes[bundletype]
1737 if comp not in util.compengines.supportedbundletypes:
1737 if comp not in util.compengines.supportedbundletypes:
1738 raise error.Abort(_('unknown stream compression type: %s')
1738 raise error.Abort(_('unknown stream compression type: %s')
1739 % comp)
1739 % comp)
1740 compengine = util.compengines.forbundletype(comp)
1740 compengine = util.compengines.forbundletype(comp)
1741 def chunkiter():
1741 def chunkiter():
1742 yield header
1742 yield header
1743 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1743 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1744 yield chunk
1744 yield chunk
1745 chunkiter = chunkiter()
1745 chunkiter = chunkiter()
1746
1746
1747 # parse the changegroup data, otherwise we will block
1747 # parse the changegroup data, otherwise we will block
1748 # in case of sshrepo because we don't know the end of the stream
1748 # in case of sshrepo because we don't know the end of the stream
1749 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1749 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1750
1750
1751 def combinechangegroupresults(op):
1751 def combinechangegroupresults(op):
1752 """logic to combine 0 or more addchangegroup results into one"""
1752 """logic to combine 0 or more addchangegroup results into one"""
1753 results = [r.get('return', 0)
1753 results = [r.get('return', 0)
1754 for r in op.records['changegroup']]
1754 for r in op.records['changegroup']]
1755 changedheads = 0
1755 changedheads = 0
1756 result = 1
1756 result = 1
1757 for ret in results:
1757 for ret in results:
1758 # If any changegroup result is 0, return 0
1758 # If any changegroup result is 0, return 0
1759 if ret == 0:
1759 if ret == 0:
1760 result = 0
1760 result = 0
1761 break
1761 break
1762 if ret < -1:
1762 if ret < -1:
1763 changedheads += ret + 1
1763 changedheads += ret + 1
1764 elif ret > 1:
1764 elif ret > 1:
1765 changedheads += ret - 1
1765 changedheads += ret - 1
1766 if changedheads > 0:
1766 if changedheads > 0:
1767 result = 1 + changedheads
1767 result = 1 + changedheads
1768 elif changedheads < 0:
1768 elif changedheads < 0:
1769 result = -1 + changedheads
1769 result = -1 + changedheads
1770 return result
1770 return result
1771
1771
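# Worked example (hypothetical return values): combining results 3 (two heads
# added) and -2 (one head removed) under the rules above gives
# changedheads = (3 - 1) + (-2 + 1) = 1, so the combined result is
# 1 + changedheads = 2; a single 0 anywhere would short-circuit to 0.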
1772 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1772 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1773 'targetphase'))
1773 'targetphase'))
1774 def handlechangegroup(op, inpart):
1774 def handlechangegroup(op, inpart):
1775 """apply a changegroup part on the repo
1775 """apply a changegroup part on the repo
1776
1776
1777 This is a very early implementation that will see massive rework before being
1777 This is a very early implementation that will see massive rework before being
1778 inflicted on any end-user.
1778 inflicted on any end-user.
1779 """
1779 """
1780 tr = op.gettransaction()
1780 tr = op.gettransaction()
1781 unpackerversion = inpart.params.get('version', '01')
1781 unpackerversion = inpart.params.get('version', '01')
1782 # We should raise an appropriate exception here
1782 # We should raise an appropriate exception here
1783 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1783 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1784 # the source and url passed here are overwritten by the ones contained in
1784 # the source and url passed here are overwritten by the ones contained in
1785 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1785 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1786 nbchangesets = None
1786 nbchangesets = None
1787 if 'nbchanges' in inpart.params:
1787 if 'nbchanges' in inpart.params:
1788 nbchangesets = int(inpart.params.get('nbchanges'))
1788 nbchangesets = int(inpart.params.get('nbchanges'))
1789 if ('treemanifest' in inpart.params and
1789 if ('treemanifest' in inpart.params and
1790 'treemanifest' not in op.repo.requirements):
1790 'treemanifest' not in op.repo.requirements):
1791 if len(op.repo.changelog) != 0:
1791 if len(op.repo.changelog) != 0:
1792 raise error.Abort(_(
1792 raise error.Abort(_(
1793 "bundle contains tree manifests, but local repo is "
1793 "bundle contains tree manifests, but local repo is "
1794 "non-empty and does not use tree manifests"))
1794 "non-empty and does not use tree manifests"))
1795 op.repo.requirements.add('treemanifest')
1795 op.repo.requirements.add('treemanifest')
1796 op.repo._applyopenerreqs()
1796 op.repo._applyopenerreqs()
1797 op.repo._writerequirements()
1797 op.repo._writerequirements()
1798 extrakwargs = {}
1798 extrakwargs = {}
1799 targetphase = inpart.params.get('targetphase')
1799 targetphase = inpart.params.get('targetphase')
1800 if targetphase is not None:
1800 if targetphase is not None:
1801 extrakwargs[r'targetphase'] = int(targetphase)
1801 extrakwargs[r'targetphase'] = int(targetphase)
1802 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1802 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1803 expectedtotal=nbchangesets, **extrakwargs)
1803 expectedtotal=nbchangesets, **extrakwargs)
1804 if op.reply is not None:
1804 if op.reply is not None:
1805 # This is definitely not the final form of this
1805 # This is definitely not the final form of this
1806 # return. But one need to start somewhere.
1806 # return. But one need to start somewhere.
1807 part = op.reply.newpart('reply:changegroup', mandatory=False)
1807 part = op.reply.newpart('reply:changegroup', mandatory=False)
1808 part.addparam(
1808 part.addparam(
1809 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1809 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1810 part.addparam('return', '%i' % ret, mandatory=False)
1810 part.addparam('return', '%i' % ret, mandatory=False)
1811 assert not inpart.read()
1811 assert not inpart.read()
1812
1812
1813 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1813 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1814 ['digest:%s' % k for k in util.DIGESTS.keys()])
1814 ['digest:%s' % k for k in util.DIGESTS.keys()])
1815 @parthandler('remote-changegroup', _remotechangegroupparams)
1815 @parthandler('remote-changegroup', _remotechangegroupparams)
1816 def handleremotechangegroup(op, inpart):
1816 def handleremotechangegroup(op, inpart):
1817 """apply a bundle10 on the repo, given an url and validation information
1817 """apply a bundle10 on the repo, given an url and validation information
1818
1818
1819 All the information about the remote bundle to import is given as
1819 All the information about the remote bundle to import is given as
1820 parameters. The parameters include:
1820 parameters. The parameters include:
1821 - url: the url to the bundle10.
1821 - url: the url to the bundle10.
1822 - size: the bundle10 file size. It is used to validate what was
1822 - size: the bundle10 file size. It is used to validate what was
1823 retrieved by the client matches the server knowledge about the bundle.
1823 retrieved by the client matches the server knowledge about the bundle.
1824 - digests: a space separated list of the digest types provided as
1824 - digests: a space separated list of the digest types provided as
1825 parameters.
1825 parameters.
1826 - digest:<digest-type>: the hexadecimal representation of the digest with
1826 - digest:<digest-type>: the hexadecimal representation of the digest with
1827 that name. Like the size, it is used to validate what was retrieved by
1827 that name. Like the size, it is used to validate what was retrieved by
1828 the client matches what the server knows about the bundle.
1828 the client matches what the server knows about the bundle.
1829
1829
1830 When multiple digest types are given, all of them are checked.
1830 When multiple digest types are given, all of them are checked.
1831 """
1831 """
1832 try:
1832 try:
1833 raw_url = inpart.params['url']
1833 raw_url = inpart.params['url']
1834 except KeyError:
1834 except KeyError:
1835 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1835 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1836 parsed_url = util.url(raw_url)
1836 parsed_url = util.url(raw_url)
1837 if parsed_url.scheme not in capabilities['remote-changegroup']:
1837 if parsed_url.scheme not in capabilities['remote-changegroup']:
1838 raise error.Abort(_('remote-changegroup does not support %s urls') %
1838 raise error.Abort(_('remote-changegroup does not support %s urls') %
1839 parsed_url.scheme)
1839 parsed_url.scheme)
1840
1840
1841 try:
1841 try:
1842 size = int(inpart.params['size'])
1842 size = int(inpart.params['size'])
1843 except ValueError:
1843 except ValueError:
1844 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1844 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1845 % 'size')
1845 % 'size')
1846 except KeyError:
1846 except KeyError:
1847 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1847 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1848
1848
1849 digests = {}
1849 digests = {}
1850 for typ in inpart.params.get('digests', '').split():
1850 for typ in inpart.params.get('digests', '').split():
1851 param = 'digest:%s' % typ
1851 param = 'digest:%s' % typ
1852 try:
1852 try:
1853 value = inpart.params[param]
1853 value = inpart.params[param]
1854 except KeyError:
1854 except KeyError:
1855 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1855 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1856 param)
1856 param)
1857 digests[typ] = value
1857 digests[typ] = value
1858
1858
1859 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1859 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1860
1860
1861 tr = op.gettransaction()
1861 tr = op.gettransaction()
1862 from . import exchange
1862 from . import exchange
1863 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1863 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1864 if not isinstance(cg, changegroup.cg1unpacker):
1864 if not isinstance(cg, changegroup.cg1unpacker):
1865 raise error.Abort(_('%s: not a bundle version 1.0') %
1865 raise error.Abort(_('%s: not a bundle version 1.0') %
1866 util.hidepassword(raw_url))
1866 util.hidepassword(raw_url))
1867 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1867 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1868 if op.reply is not None:
1868 if op.reply is not None:
1869 # This is definitely not the final form of this
1869 # This is definitely not the final form of this
1870 # return. But one needs to start somewhere.
1870 # return. But one needs to start somewhere.
1871 part = op.reply.newpart('reply:changegroup')
1871 part = op.reply.newpart('reply:changegroup')
1872 part.addparam(
1872 part.addparam(
1873 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1873 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1874 part.addparam('return', '%i' % ret, mandatory=False)
1874 part.addparam('return', '%i' % ret, mandatory=False)
1875 try:
1875 try:
1876 real_part.validate()
1876 real_part.validate()
1877 except error.Abort as e:
1877 except error.Abort as e:
1878 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1878 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1879 (util.hidepassword(raw_url), str(e)))
1879 (util.hidepassword(raw_url), str(e)))
1880 assert not inpart.read()
1880 assert not inpart.read()
1881
1881
1882 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1882 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1883 def handlereplychangegroup(op, inpart):
1883 def handlereplychangegroup(op, inpart):
1884 ret = int(inpart.params['return'])
1884 ret = int(inpart.params['return'])
1885 replyto = int(inpart.params['in-reply-to'])
1885 replyto = int(inpart.params['in-reply-to'])
1886 op.records.add('changegroup', {'return': ret}, replyto)
1886 op.records.add('changegroup', {'return': ret}, replyto)
1887
1887
1888 @parthandler('check:bookmarks')
1888 @parthandler('check:bookmarks')
1889 def handlecheckbookmarks(op, inpart):
1889 def handlecheckbookmarks(op, inpart):
1890 """check location of bookmarks
1890 """check location of bookmarks
1891
1891
1892 This part is used to detect push races regarding bookmarks. It contains
1892 This part is used to detect push races regarding bookmarks. It contains
1893 binary encoded (bookmark, node) tuples. If the local state does not match
1893 binary encoded (bookmark, node) tuples. If the local state does not match
1894 the one in the part, a PushRaced exception is raised.
1894 the one in the part, a PushRaced exception is raised.
1895 """
1895 """
1896 bookdata = bookmarks.binarydecode(inpart)
1896 bookdata = bookmarks.binarydecode(inpart)
1897
1897
1898 msgstandard = ('repository changed while pushing - please try again '
1898 msgstandard = ('repository changed while pushing - please try again '
1899 '(bookmark "%s" move from %s to %s)')
1899 '(bookmark "%s" move from %s to %s)')
1900 msgmissing = ('repository changed while pushing - please try again '
1900 msgmissing = ('repository changed while pushing - please try again '
1901 '(bookmark "%s" is missing, expected %s)')
1901 '(bookmark "%s" is missing, expected %s)')
1902 msgexist = ('repository changed while pushing - please try again '
1902 msgexist = ('repository changed while pushing - please try again '
1903 '(bookmark "%s" set on %s, expected missing)')
1903 '(bookmark "%s" set on %s, expected missing)')
1904 for book, node in bookdata:
1904 for book, node in bookdata:
1905 currentnode = op.repo._bookmarks.get(book)
1905 currentnode = op.repo._bookmarks.get(book)
1906 if currentnode != node:
1906 if currentnode != node:
1907 if node is None:
1907 if node is None:
1908 finalmsg = msgexist % (book, nodemod.short(currentnode))
1908 finalmsg = msgexist % (book, nodemod.short(currentnode))
1909 elif currentnode is None:
1909 elif currentnode is None:
1910 finalmsg = msgmissing % (book, nodemod.short(node))
1910 finalmsg = msgmissing % (book, nodemod.short(node))
1911 else:
1911 else:
1912 finalmsg = msgstandard % (book, nodemod.short(node),
1912 finalmsg = msgstandard % (book, nodemod.short(node),
1913 nodemod.short(currentnode))
1913 nodemod.short(currentnode))
1914 raise error.PushRaced(finalmsg)
1914 raise error.PushRaced(finalmsg)
1915
1915
1916 @parthandler('check:heads')
1916 @parthandler('check:heads')
1917 def handlecheckheads(op, inpart):
1917 def handlecheckheads(op, inpart):
1918 """check that head of the repo did not change
1918 """check that head of the repo did not change
1919
1919
1920 This is used to detect a push race when using unbundle.
1920 This is used to detect a push race when using unbundle.
1921 This replaces the "heads" argument of unbundle."""
1921 This replaces the "heads" argument of unbundle."""
1922 h = inpart.read(20)
1922 h = inpart.read(20)
1923 heads = []
1923 heads = []
1924 while len(h) == 20:
1924 while len(h) == 20:
1925 heads.append(h)
1925 heads.append(h)
1926 h = inpart.read(20)
1926 h = inpart.read(20)
1927 assert not h
1927 assert not h
1928 # Trigger a transaction so that we are guaranteed to have the lock now.
1928 # Trigger a transaction so that we are guaranteed to have the lock now.
1929 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1929 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1930 op.gettransaction()
1930 op.gettransaction()
1931 if sorted(heads) != sorted(op.repo.heads()):
1931 if sorted(heads) != sorted(op.repo.heads()):
1932 raise error.PushRaced('repository changed while pushing - '
1932 raise error.PushRaced('repository changed while pushing - '
1933 'please try again')
1933 'please try again')
1934
1934
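# Illustrative sketch (hypothetical helper, not part of bundle2.py): the
# 'check:heads' payload consumed above is just the expected 20-byte head
# nodes concatenated back to back, so a client-side encoder could be:
def _sketch_encodecheckheads(heads):
    # 'heads' is assumed to be an iterable of raw 20-byte changelog nodes
    return b''.join(heads)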
1935 @parthandler('check:updated-heads')
1935 @parthandler('check:updated-heads')
1936 def handlecheckupdatedheads(op, inpart):
1936 def handlecheckupdatedheads(op, inpart):
1937 """check for race on the heads touched by a push
1937 """check for race on the heads touched by a push
1938
1938
1939 This is similar to 'check:heads' but focuses on the heads actually updated
1939 This is similar to 'check:heads' but focuses on the heads actually updated
1940 during the push. If other activities happen on unrelated heads, they are
1940 during the push. If other activities happen on unrelated heads, they are
1941 ignored.
1941 ignored.
1942
1942
1943 This allows servers with high traffic to avoid push contention as long as
1943 This allows servers with high traffic to avoid push contention as long as
1944 only unrelated parts of the graph are involved.
1944 only unrelated parts of the graph are involved.
1945 h = inpart.read(20)
1945 h = inpart.read(20)
1946 heads = []
1946 heads = []
1947 while len(h) == 20:
1947 while len(h) == 20:
1948 heads.append(h)
1948 heads.append(h)
1949 h = inpart.read(20)
1949 h = inpart.read(20)
1950 assert not h
1950 assert not h
1951 # trigger a transaction so that we are guaranteed to have the lock now.
1951 # trigger a transaction so that we are guaranteed to have the lock now.
1952 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1952 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1953 op.gettransaction()
1953 op.gettransaction()
1954
1954
1955 currentheads = set()
1955 currentheads = set()
1956 for ls in op.repo.branchmap().itervalues():
1956 for ls in op.repo.branchmap().itervalues():
1957 currentheads.update(ls)
1957 currentheads.update(ls)
1958
1958
1959 for h in heads:
1959 for h in heads:
1960 if h not in currentheads:
1960 if h not in currentheads:
1961 raise error.PushRaced('repository changed while pushing - '
1961 raise error.PushRaced('repository changed while pushing - '
1962 'please try again')
1962 'please try again')
1963
1963
1964 @parthandler('check:phases')
1964 @parthandler('check:phases')
1965 def handlecheckphases(op, inpart):
1965 def handlecheckphases(op, inpart):
1966 """check that phase boundaries of the repository did not change
1966 """check that phase boundaries of the repository did not change
1967
1967
1968 This is used to detect a push race.
1968 This is used to detect a push race.
1969 """
1969 """
1970 phasetonodes = phases.binarydecode(inpart)
1970 phasetonodes = phases.binarydecode(inpart)
1971 unfi = op.repo.unfiltered()
1971 unfi = op.repo.unfiltered()
1972 cl = unfi.changelog
1972 cl = unfi.changelog
1973 phasecache = unfi._phasecache
1973 phasecache = unfi._phasecache
1974 msg = ('repository changed while pushing - please try again '
1974 msg = ('repository changed while pushing - please try again '
1975 '(%s is %s expected %s)')
1975 '(%s is %s expected %s)')
1976 for expectedphase, nodes in enumerate(phasetonodes):
1976 for expectedphase, nodes in enumerate(phasetonodes):
1977 for n in nodes:
1977 for n in nodes:
1978 actualphase = phasecache.phase(unfi, cl.rev(n))
1978 actualphase = phasecache.phase(unfi, cl.rev(n))
1979 if actualphase != expectedphase:
1979 if actualphase != expectedphase:
1980 finalmsg = msg % (nodemod.short(n),
1980 finalmsg = msg % (nodemod.short(n),
1981 phases.phasenames[actualphase],
1981 phases.phasenames[actualphase],
1982 phases.phasenames[expectedphase])
1982 phases.phasenames[expectedphase])
1983 raise error.PushRaced(finalmsg)
1983 raise error.PushRaced(finalmsg)
1984
1984
1985 @parthandler('output')
1985 @parthandler('output')
1986 def handleoutput(op, inpart):
1986 def handleoutput(op, inpart):
1987 """forward output captured on the server to the client"""
1987 """forward output captured on the server to the client"""
1988 for line in inpart.read().splitlines():
1988 for line in inpart.read().splitlines():
1989 op.ui.status(_('remote: %s\n') % line)
1989 op.ui.status(_('remote: %s\n') % line)
1990
1990
1991 @parthandler('replycaps')
1991 @parthandler('replycaps')
1992 def handlereplycaps(op, inpart):
1992 def handlereplycaps(op, inpart):
1993 """Notify that a reply bundle should be created
1993 """Notify that a reply bundle should be created
1994
1994
1995 The payload contains the capabilities information for the reply"""
1995 The payload contains the capabilities information for the reply"""
1996 caps = decodecaps(inpart.read())
1996 caps = decodecaps(inpart.read())
1997 if op.reply is None:
1997 if op.reply is None:
1998 op.reply = bundle20(op.ui, caps)
1998 op.reply = bundle20(op.ui, caps)
1999
1999
2000 class AbortFromPart(error.Abort):
2000 class AbortFromPart(error.Abort):
2001 """Sub-class of Abort that denotes an error from a bundle2 part."""
2001 """Sub-class of Abort that denotes an error from a bundle2 part."""
2002
2002
2003 @parthandler('error:abort', ('message', 'hint'))
2003 @parthandler('error:abort', ('message', 'hint'))
2004 def handleerrorabort(op, inpart):
2004 def handleerrorabort(op, inpart):
2005 """Used to transmit abort error over the wire"""
2005 """Used to transmit abort error over the wire"""
2006 raise AbortFromPart(inpart.params['message'],
2006 raise AbortFromPart(inpart.params['message'],
2007 hint=inpart.params.get('hint'))
2007 hint=inpart.params.get('hint'))
2008
2008
2009 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
2009 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
2010 'in-reply-to'))
2010 'in-reply-to'))
2011 def handleerrorpushkey(op, inpart):
2011 def handleerrorpushkey(op, inpart):
2012 """Used to transmit failure of a mandatory pushkey over the wire"""
2012 """Used to transmit failure of a mandatory pushkey over the wire"""
2013 kwargs = {}
2013 kwargs = {}
2014 for name in ('namespace', 'key', 'new', 'old', 'ret'):
2014 for name in ('namespace', 'key', 'new', 'old', 'ret'):
2015 value = inpart.params.get(name)
2015 value = inpart.params.get(name)
2016 if value is not None:
2016 if value is not None:
2017 kwargs[name] = value
2017 kwargs[name] = value
2018 raise error.PushkeyFailed(inpart.params['in-reply-to'],
2018 raise error.PushkeyFailed(inpart.params['in-reply-to'],
2019 **pycompat.strkwargs(kwargs))
2019 **pycompat.strkwargs(kwargs))
2020
2020
2021 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
2021 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
2022 def handleerrorunsupportedcontent(op, inpart):
2022 def handleerrorunsupportedcontent(op, inpart):
2023 """Used to transmit unknown content error over the wire"""
2023 """Used to transmit unknown content error over the wire"""
2024 kwargs = {}
2024 kwargs = {}
2025 parttype = inpart.params.get('parttype')
2025 parttype = inpart.params.get('parttype')
2026 if parttype is not None:
2026 if parttype is not None:
2027 kwargs['parttype'] = parttype
2027 kwargs['parttype'] = parttype
2028 params = inpart.params.get('params')
2028 params = inpart.params.get('params')
2029 if params is not None:
2029 if params is not None:
2030 kwargs['params'] = params.split('\0')
2030 kwargs['params'] = params.split('\0')
2031
2031
2032 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2032 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2033
2033
2034 @parthandler('error:pushraced', ('message',))
2034 @parthandler('error:pushraced', ('message',))
2035 def handleerrorpushraced(op, inpart):
2035 def handleerrorpushraced(op, inpart):
2036 """Used to transmit push race error over the wire"""
2036 """Used to transmit push race error over the wire"""
2037 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2037 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2038
2038
2039 @parthandler('listkeys', ('namespace',))
2039 @parthandler('listkeys', ('namespace',))
2040 def handlelistkeys(op, inpart):
2040 def handlelistkeys(op, inpart):
2041 """retrieve pushkey namespace content stored in a bundle2"""
2041 """retrieve pushkey namespace content stored in a bundle2"""
2042 namespace = inpart.params['namespace']
2042 namespace = inpart.params['namespace']
2043 r = pushkey.decodekeys(inpart.read())
2043 r = pushkey.decodekeys(inpart.read())
2044 op.records.add('listkeys', (namespace, r))
2044 op.records.add('listkeys', (namespace, r))
2045
2045
2046 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2046 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2047 def handlepushkey(op, inpart):
2047 def handlepushkey(op, inpart):
2048 """process a pushkey request"""
2048 """process a pushkey request"""
2049 dec = pushkey.decode
2049 dec = pushkey.decode
2050 namespace = dec(inpart.params['namespace'])
2050 namespace = dec(inpart.params['namespace'])
2051 key = dec(inpart.params['key'])
2051 key = dec(inpart.params['key'])
2052 old = dec(inpart.params['old'])
2052 old = dec(inpart.params['old'])
2053 new = dec(inpart.params['new'])
2053 new = dec(inpart.params['new'])
2054 # Grab the transaction to ensure that we have the lock before performing the
2054 # Grab the transaction to ensure that we have the lock before performing the
2055 # pushkey.
2055 # pushkey.
2056 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2056 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2057 op.gettransaction()
2057 op.gettransaction()
2058 ret = op.repo.pushkey(namespace, key, old, new)
2058 ret = op.repo.pushkey(namespace, key, old, new)
2059 record = {'namespace': namespace,
2059 record = {'namespace': namespace,
2060 'key': key,
2060 'key': key,
2061 'old': old,
2061 'old': old,
2062 'new': new}
2062 'new': new}
2063 op.records.add('pushkey', record)
2063 op.records.add('pushkey', record)
2064 if op.reply is not None:
2064 if op.reply is not None:
2065 rpart = op.reply.newpart('reply:pushkey')
2065 rpart = op.reply.newpart('reply:pushkey')
2066 rpart.addparam(
2066 rpart.addparam(
2067 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2067 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2068 rpart.addparam('return', '%i' % ret, mandatory=False)
2068 rpart.addparam('return', '%i' % ret, mandatory=False)
2069 if inpart.mandatory and not ret:
2069 if inpart.mandatory and not ret:
2070 kwargs = {}
2070 kwargs = {}
2071 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2071 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2072 if key in inpart.params:
2072 if key in inpart.params:
2073 kwargs[key] = inpart.params[key]
2073 kwargs[key] = inpart.params[key]
2074 raise error.PushkeyFailed(partid='%d' % inpart.id,
2074 raise error.PushkeyFailed(partid='%d' % inpart.id,
2075 **pycompat.strkwargs(kwargs))
2075 **pycompat.strkwargs(kwargs))
2076
2076
2077 @parthandler('bookmarks')
2077 @parthandler('bookmarks')
2078 def handlebookmark(op, inpart):
2078 def handlebookmark(op, inpart):
2079 """transmit bookmark information
2079 """transmit bookmark information
2080
2080
2081 The part contains binary encoded bookmark information.
2081 The part contains binary encoded bookmark information.
2082
2082
2083 The exact behavior of this part can be controlled by the 'bookmarks' mode
2083 The exact behavior of this part can be controlled by the 'bookmarks' mode
2084 on the bundle operation.
2084 on the bundle operation.
2085
2085
2086 When mode is 'apply' (the default) the bookmark information is applied as
2086 When mode is 'apply' (the default) the bookmark information is applied as
2087 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2087 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2088 issued earlier to check for push races in such an update. This behavior is
2088 issued earlier to check for push races in such an update. This behavior is
2089 suitable for pushing.
2089 suitable for pushing.
2090
2090
2091 When mode is 'records', the information is recorded into the 'bookmarks'
2091 When mode is 'records', the information is recorded into the 'bookmarks'
2092 records of the bundle operation. This behavior is suitable for pulling.
2092 records of the bundle operation. This behavior is suitable for pulling.
2093 """
2093 """
2094 changes = bookmarks.binarydecode(inpart)
2094 changes = bookmarks.binarydecode(inpart)
2095
2095
2096 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2096 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2097 bookmarksmode = op.modes.get('bookmarks', 'apply')
2097 bookmarksmode = op.modes.get('bookmarks', 'apply')
2098
2098
2099 if bookmarksmode == 'apply':
2099 if bookmarksmode == 'apply':
2100 tr = op.gettransaction()
2100 tr = op.gettransaction()
2101 bookstore = op.repo._bookmarks
2101 bookstore = op.repo._bookmarks
2102 if pushkeycompat:
2102 if pushkeycompat:
2103 allhooks = []
2103 allhooks = []
2104 for book, node in changes:
2104 for book, node in changes:
2105 hookargs = tr.hookargs.copy()
2105 hookargs = tr.hookargs.copy()
2106 hookargs['pushkeycompat'] = '1'
2106 hookargs['pushkeycompat'] = '1'
2107 hookargs['namespace'] = 'bookmarks'
2107 hookargs['namespace'] = 'bookmarks'
2108 hookargs['key'] = book
2108 hookargs['key'] = book
2109 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2109 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2110 hookargs['new'] = nodemod.hex(node if node is not None else '')
2110 hookargs['new'] = nodemod.hex(node if node is not None else '')
2111 allhooks.append(hookargs)
2111 allhooks.append(hookargs)
2112
2112
2113 for hookargs in allhooks:
2113 for hookargs in allhooks:
2114 op.repo.hook('prepushkey', throw=True,
2114 op.repo.hook('prepushkey', throw=True,
2115 **pycompat.strkwargs(hookargs))
2115 **pycompat.strkwargs(hookargs))
2116
2116
2117 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2117 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2118
2118
2119 if pushkeycompat:
2119 if pushkeycompat:
2120 def runhook():
2120 def runhook():
2121 for hookargs in allhooks:
2121 for hookargs in allhooks:
2122 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2122 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2123 op.repo._afterlock(runhook)
2123 op.repo._afterlock(runhook)
2124
2124
2125 elif bookmarksmode == 'records':
2125 elif bookmarksmode == 'records':
2126 for book, node in changes:
2126 for book, node in changes:
2127 record = {'bookmark': book, 'node': node}
2127 record = {'bookmark': book, 'node': node}
2128 op.records.add('bookmarks', record)
2128 op.records.add('bookmarks', record)
2129 else:
2129 else:
2130 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2130 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2131
2131
2132 @parthandler('phase-heads')
2132 @parthandler('phase-heads')
2133 def handlephases(op, inpart):
2133 def handlephases(op, inpart):
2134 """apply phases from bundle part to repo"""
2134 """apply phases from bundle part to repo"""
2135 headsbyphase = phases.binarydecode(inpart)
2135 headsbyphase = phases.binarydecode(inpart)
2136 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2136 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2137
2137
2138 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2138 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2139 def handlepushkeyreply(op, inpart):
2139 def handlepushkeyreply(op, inpart):
2140 """retrieve the result of a pushkey request"""
2140 """retrieve the result of a pushkey request"""
2141 ret = int(inpart.params['return'])
2141 ret = int(inpart.params['return'])
2142 partid = int(inpart.params['in-reply-to'])
2142 partid = int(inpart.params['in-reply-to'])
2143 op.records.add('pushkey', {'return': ret}, partid)
2143 op.records.add('pushkey', {'return': ret}, partid)
2144
2144
2145 @parthandler('obsmarkers')
2145 @parthandler('obsmarkers')
2146 def handleobsmarker(op, inpart):
2146 def handleobsmarker(op, inpart):
2147 """add a stream of obsmarkers to the repo"""
2147 """add a stream of obsmarkers to the repo"""
2148 tr = op.gettransaction()
2148 tr = op.gettransaction()
2149 markerdata = inpart.read()
2149 markerdata = inpart.read()
2150 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2150 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2151 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2151 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2152 % len(markerdata))
2152 % len(markerdata))
2153 # The mergemarkers call will crash if marker creation is not enabled.
2153 # The mergemarkers call will crash if marker creation is not enabled.
2154 # we want to avoid this if the part is advisory.
2154 # we want to avoid this if the part is advisory.
2155 if not inpart.mandatory and op.repo.obsstore.readonly:
2155 if not inpart.mandatory and op.repo.obsstore.readonly:
2156 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2156 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2157 return
2157 return
2158 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2158 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2159 op.repo.invalidatevolatilesets()
2159 op.repo.invalidatevolatilesets()
2160 if new:
2160 if new:
2161 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2161 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2162 op.records.add('obsmarkers', {'new': new})
2162 op.records.add('obsmarkers', {'new': new})
2163 if op.reply is not None:
2163 if op.reply is not None:
2164 rpart = op.reply.newpart('reply:obsmarkers')
2164 rpart = op.reply.newpart('reply:obsmarkers')
2165 rpart.addparam(
2165 rpart.addparam(
2166 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2166 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2167 rpart.addparam('new', '%i' % new, mandatory=False)
2167 rpart.addparam('new', '%i' % new, mandatory=False)
2168
2168
2169
2169
2170 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2170 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2171 def handleobsmarkerreply(op, inpart):
2171 def handleobsmarkerreply(op, inpart):
2172 """retrieve the result of a pushkey request"""
2172 """retrieve the result of a pushkey request"""
2173 ret = int(inpart.params['new'])
2173 ret = int(inpart.params['new'])
2174 partid = int(inpart.params['in-reply-to'])
2174 partid = int(inpart.params['in-reply-to'])
2175 op.records.add('obsmarkers', {'new': ret}, partid)
2175 op.records.add('obsmarkers', {'new': ret}, partid)
2176
2176
2177 @parthandler('hgtagsfnodes')
2177 @parthandler('hgtagsfnodes')
2178 def handlehgtagsfnodes(op, inpart):
2178 def handlehgtagsfnodes(op, inpart):
2179 """Applies .hgtags fnodes cache entries to the local repo.
2179 """Applies .hgtags fnodes cache entries to the local repo.
2180
2180
2181 Payload is pairs of 20 byte changeset nodes and filenodes.
2181 Payload is pairs of 20 byte changeset nodes and filenodes.
2182 """
2182 """
2183 # Grab the transaction so we ensure that we have the lock at this point.
2183 # Grab the transaction so we ensure that we have the lock at this point.
2184 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2184 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2185 op.gettransaction()
2185 op.gettransaction()
2186 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2186 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2187
2187
2188 count = 0
2188 count = 0
2189 while True:
2189 while True:
2190 node = inpart.read(20)
2190 node = inpart.read(20)
2191 fnode = inpart.read(20)
2191 fnode = inpart.read(20)
2192 if len(node) < 20 or len(fnode) < 20:
2192 if len(node) < 20 or len(fnode) < 20:
2193 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2193 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2194 break
2194 break
2195 cache.setfnode(node, fnode)
2195 cache.setfnode(node, fnode)
2196 count += 1
2196 count += 1
2197
2197
2198 cache.write()
2198 cache.write()
2199 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2199 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2200
2200
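# Illustrative sketch (hypothetical helper, not part of bundle2.py): the
# 'hgtagsfnodes' payload decoded above is a flat sequence of
# (changeset node, .hgtags filenode) 20-byte pairs, mirroring what the
# tags-fnodes part generator earlier in this file emits, so a builder could be:
def _sketch_encodetagsfnodes(pairs):
    # 'pairs' is assumed to be an iterable of (node, fnode) raw 20-byte values
    return b''.join(node + fnode for node, fnode in pairs)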
2201 rbcstruct = struct.Struct('>III')
2201 rbcstruct = struct.Struct('>III')
2202
2202
2203 @parthandler('cache:rev-branch-cache')
2203 @parthandler('cache:rev-branch-cache')
2204 def handlerbc(op, inpart):
2204 def handlerbc(op, inpart):
2205 """receive a rev-branch-cache payload and update the local cache
2205 """receive a rev-branch-cache payload and update the local cache
2206
2206
2207 The payload is a series of per-branch records, each consisting of:
2207 The payload is a series of per-branch records, each consisting of:
2208
2208
2209 1) branch name length
2209 1) branch name length
2210 2) number of open heads
2210 2) number of open heads
2211 3) number of closed heads
2211 3) number of closed heads
2212 4) open heads nodes
2212 4) open heads nodes
2213 5) closed heads nodes
2213 5) closed heads nodes
2214 """
2214 """
2215 total = 0
2215 total = 0
2216 rawheader = inpart.read(rbcstruct.size)
2216 rawheader = inpart.read(rbcstruct.size)
2217 cache = op.repo.revbranchcache()
2217 cache = op.repo.revbranchcache()
2218 cl = op.repo.unfiltered().changelog
2218 cl = op.repo.unfiltered().changelog
2219 while rawheader:
2219 while rawheader:
2220 header = rbcstruct.unpack(rawheader)
2220 header = rbcstruct.unpack(rawheader)
2221 total += header[1] + header[2]
2221 total += header[1] + header[2]
2222 utf8branch = inpart.read(header[0])
2222 utf8branch = inpart.read(header[0])
2223 branch = encoding.tolocal(utf8branch)
2223 branch = encoding.tolocal(utf8branch)
2224 for x in xrange(header[1]):
2224 for x in xrange(header[1]):
2225 node = inpart.read(20)
2225 node = inpart.read(20)
2226 rev = cl.rev(node)
2226 rev = cl.rev(node)
2227 cache.setdata(branch, rev, node, False)
2227 cache.setdata(branch, rev, node, False)
2228 for x in xrange(header[2]):
2228 for x in xrange(header[2]):
2229 node = inpart.read(20)
2229 node = inpart.read(20)
2230 rev = cl.rev(node)
2230 rev = cl.rev(node)
2231 cache.setdata(branch, rev, node, True)
2231 cache.setdata(branch, rev, node, True)
2232 rawheader = inpart.read(rbcstruct.size)
2232 rawheader = inpart.read(rbcstruct.size)
2233 cache.write()
2233 cache.write()
2234
2234
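# Worked example (hypothetical data, not part of this module): a minimal
# 'cache:rev-branch-cache' payload for one branch named 'default' with a
# single open head and no closed heads, laid out as the docstring above
# describes:
#
#   node = b'\x11' * 20                               # placeholder node
#   payload = rbcstruct.pack(7, 1, 0) + b'default' + node
#
# The handler would read the 12-byte header, the 7-byte branch name, one
# open-head node and zero closed-head nodes from it.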
2235 @parthandler('pushvars')
2235 @parthandler('pushvars')
2236 def bundle2getvars(op, part):
2236 def bundle2getvars(op, part):
2237 '''unbundle a bundle2 containing shellvars on the server'''
2237 '''unbundle a bundle2 containing shellvars on the server'''
2238 # An option to disable unbundling on server-side for security reasons
2238 # An option to disable unbundling on server-side for security reasons
2239 if op.ui.configbool('push', 'pushvars.server'):
2239 if op.ui.configbool('push', 'pushvars.server'):
2240 hookargs = {}
2240 hookargs = {}
2241 for key, value in part.advisoryparams:
2241 for key, value in part.advisoryparams:
2242 key = key.upper()
2242 key = key.upper()
2243 # We want pushed variables to have USERVAR_ prepended so we know
2243 # We want pushed variables to have USERVAR_ prepended so we know
2244 # they came from the --pushvar flag.
2244 # they came from the --pushvar flag.
2245 key = "USERVAR_" + key
2245 key = "USERVAR_" + key
2246 hookargs[key] = value
2246 hookargs[key] = value
2247 op.addhookargs(hookargs)
2247 op.addhookargs(hookargs)
2248
2248
2249 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2249 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2250 def handlestreamv2bundle(op, part):
2250 def handlestreamv2bundle(op, part):
2251
2251
2252 requirements = urlreq.unquote(part.params['requirements']).split(',')
2252 requirements = urlreq.unquote(part.params['requirements']).split(',')
2253 filecount = int(part.params['filecount'])
2253 filecount = int(part.params['filecount'])
2254 bytecount = int(part.params['bytecount'])
2254 bytecount = int(part.params['bytecount'])
2255
2255
2256 repo = op.repo
2256 repo = op.repo
2257 if len(repo):
2257 if len(repo):
2258 msg = _('cannot apply stream clone to non empty repository')
2258 msg = _('cannot apply stream clone to non empty repository')
2259 raise error.Abort(msg)
2259 raise error.Abort(msg)
2260
2260
2261 repo.ui.debug('applying stream bundle\n')
2261 repo.ui.debug('applying stream bundle\n')
2262 streamclone.applybundlev2(repo, part, filecount, bytecount,
2262 streamclone.applybundlev2(repo, part, filecount, bytecount,
2263 requirements)
2263 requirements)
@@ -1,2336 +1,2338 b''
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import collections
10 import collections
11 import errno
11 import errno
12 import hashlib
12 import hashlib
13
13
14 from .i18n import _
14 from .i18n import _
15 from .node import (
15 from .node import (
16 bin,
16 bin,
17 hex,
17 hex,
18 nullid,
18 nullid,
19 )
19 )
20 from .thirdparty import (
20 from .thirdparty import (
21 attr,
21 attr,
22 )
22 )
23 from . import (
23 from . import (
24 bookmarks as bookmod,
24 bookmarks as bookmod,
25 bundle2,
25 bundle2,
26 changegroup,
26 changegroup,
27 discovery,
27 discovery,
28 error,
28 error,
29 lock as lockmod,
29 lock as lockmod,
30 logexchange,
30 logexchange,
31 obsolete,
31 obsolete,
32 phases,
32 phases,
33 pushkey,
33 pushkey,
34 pycompat,
34 pycompat,
35 scmutil,
35 scmutil,
36 sslutil,
36 sslutil,
37 streamclone,
37 streamclone,
38 url as urlmod,
38 url as urlmod,
39 util,
39 util,
40 )
40 )
41 from .utils import (
41 from .utils import (
42 stringutil,
42 stringutil,
43 )
43 )
44
44
45 urlerr = util.urlerr
45 urlerr = util.urlerr
46 urlreq = util.urlreq
46 urlreq = util.urlreq
47
47
48 # Maps bundle version human names to changegroup versions.
48 # Maps bundle version human names to changegroup versions.
49 _bundlespeccgversions = {'v1': '01',
49 _bundlespeccgversions = {'v1': '01',
50 'v2': '02',
50 'v2': '02',
51 'packed1': 's1',
51 'packed1': 's1',
52 'bundle2': '02', #legacy
52 'bundle2': '02', #legacy
53 }
53 }
54
54
55 # Maps bundle version with content opts to choose which part to bundle
55 # Maps bundle version with content opts to choose which part to bundle
56 _bundlespeccontentopts = {
56 _bundlespeccontentopts = {
57 'v1': {
57 'v1': {
58 'changegroup': True,
58 'changegroup': True,
59 'cg.version': '01',
59 'cg.version': '01',
60 'obsolescence': False,
60 'obsolescence': False,
61 'phases': False,
61 'phases': False,
62 'tagsfnodescache': False,
62 'tagsfnodescache': False,
63 'revbranchcache': False
63 'revbranchcache': False
64 },
64 },
65 'v2': {
65 'v2': {
66 'changegroup': True,
66 'changegroup': True,
67 'cg.version': '02',
67 'cg.version': '02',
68 'obsolescence': False,
68 'obsolescence': False,
69 'phases': False,
69 'phases': False,
70 'tagsfnodescache': True,
70 'tagsfnodescache': True,
71 'revbranchcache': True
71 'revbranchcache': True
72 },
72 },
73 'packed1' : {
73 'packed1' : {
74 'cg.version': 's1'
74 'cg.version': 's1'
75 }
75 }
76 }
76 }
77 _bundlespeccontentopts['bundle2'] = _bundlespeccontentopts['v2']
77 _bundlespeccontentopts['bundle2'] = _bundlespeccontentopts['v2']
78
78
79 _bundlespecvariants = {"streamv2": {"changegroup": False, "streamv2": True,
79 _bundlespecvariants = {"streamv2": {"changegroup": False, "streamv2": True,
80 "tagsfnodescache": False,
80 "tagsfnodescache": False,
81 "revbranchcache": False}}
81 "revbranchcache": False}}
82
82
83 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
83 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
84 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
84 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
85
85
86 @attr.s
86 @attr.s
87 class bundlespec(object):
87 class bundlespec(object):
88 compression = attr.ib()
88 compression = attr.ib()
89 version = attr.ib()
89 version = attr.ib()
90 params = attr.ib()
90 params = attr.ib()
91 contentopts = attr.ib()
91 contentopts = attr.ib()
92
92
93 def parsebundlespec(repo, spec, strict=True, externalnames=False):
93 def parsebundlespec(repo, spec, strict=True, externalnames=False):
94 """Parse a bundle string specification into parts.
94 """Parse a bundle string specification into parts.
95
95
96 Bundle specifications denote a well-defined bundle/exchange format.
96 Bundle specifications denote a well-defined bundle/exchange format.
97 The content of a given specification should not change over time in
97 The content of a given specification should not change over time in
98 order to ensure that bundles produced by a newer version of Mercurial are
98 order to ensure that bundles produced by a newer version of Mercurial are
99 readable from an older version.
99 readable from an older version.
100
100
101 The string currently has the form:
101 The string currently has the form:
102
102
103 <compression>-<type>[;<parameter0>[;<parameter1>]]
103 <compression>-<type>[;<parameter0>[;<parameter1>]]
104
104
105 Where <compression> is one of the supported compression formats
105 Where <compression> is one of the supported compression formats
106 and <type> is (currently) a version string. A ";" can follow the type and
106 and <type> is (currently) a version string. A ";" can follow the type and
107 all text afterwards is interpreted as URI encoded, ";" delimited key=value
107 all text afterwards is interpreted as URI encoded, ";" delimited key=value
108 pairs.
108 pairs.
109
109
110 If ``strict`` is True (the default) <compression> is required. Otherwise,
110 If ``strict`` is True (the default) <compression> is required. Otherwise,
111 it is optional.
111 it is optional.
112
112
113 If ``externalnames`` is False (the default), the human-centric names will
113 If ``externalnames`` is False (the default), the human-centric names will
114 be converted to their internal representation.
114 be converted to their internal representation.
115
115
116 Returns a bundlespec object of (compression, version, parameters).
116 Returns a bundlespec object of (compression, version, parameters).
117 Compression will be ``None`` if not in strict mode and a compression isn't
117 Compression will be ``None`` if not in strict mode and a compression isn't
118 defined.
118 defined.
119
119
120 An ``InvalidBundleSpecification`` is raised when the specification is
120 An ``InvalidBundleSpecification`` is raised when the specification is
121 not syntactically well formed.
121 not syntactically well formed.
122
122
123 An ``UnsupportedBundleSpecification`` is raised when the compression or
123 An ``UnsupportedBundleSpecification`` is raised when the compression or
124 bundle type/version is not recognized.
124 bundle type/version is not recognized.
125
125
126 Note: this function will likely eventually return a more complex data
126 Note: this function will likely eventually return a more complex data
127 structure, including bundle2 part information.
127 structure, including bundle2 part information.
128 """
128 """
129 def parseparams(s):
129 def parseparams(s):
130 if ';' not in s:
130 if ';' not in s:
131 return s, {}
131 return s, {}
132
132
133 params = {}
133 params = {}
134 version, paramstr = s.split(';', 1)
134 version, paramstr = s.split(';', 1)
135
135
136 for p in paramstr.split(';'):
136 for p in paramstr.split(';'):
137 if '=' not in p:
137 if '=' not in p:
138 raise error.InvalidBundleSpecification(
138 raise error.InvalidBundleSpecification(
139 _('invalid bundle specification: '
139 _('invalid bundle specification: '
140 'missing "=" in parameter: %s') % p)
140 'missing "=" in parameter: %s') % p)
141
141
142 key, value = p.split('=', 1)
142 key, value = p.split('=', 1)
143 key = urlreq.unquote(key)
143 key = urlreq.unquote(key)
144 value = urlreq.unquote(value)
144 value = urlreq.unquote(value)
145 params[key] = value
145 params[key] = value
146
146
147 return version, params
147 return version, params
148
148
149
149
150 if strict and '-' not in spec:
150 if strict and '-' not in spec:
151 raise error.InvalidBundleSpecification(
151 raise error.InvalidBundleSpecification(
152 _('invalid bundle specification; '
152 _('invalid bundle specification; '
153 'must be prefixed with compression: %s') % spec)
153 'must be prefixed with compression: %s') % spec)
154
154
155 if '-' in spec:
155 if '-' in spec:
156 compression, version = spec.split('-', 1)
156 compression, version = spec.split('-', 1)
157
157
158 if compression not in util.compengines.supportedbundlenames:
158 if compression not in util.compengines.supportedbundlenames:
159 raise error.UnsupportedBundleSpecification(
159 raise error.UnsupportedBundleSpecification(
160 _('%s compression is not supported') % compression)
160 _('%s compression is not supported') % compression)
161
161
162 version, params = parseparams(version)
162 version, params = parseparams(version)
163
163
164 if version not in _bundlespeccgversions:
164 if version not in _bundlespeccgversions:
165 raise error.UnsupportedBundleSpecification(
165 raise error.UnsupportedBundleSpecification(
166 _('%s is not a recognized bundle version') % version)
166 _('%s is not a recognized bundle version') % version)
167 else:
167 else:
168 # Value could be just the compression or just the version, in which
168 # Value could be just the compression or just the version, in which
169 # case some defaults are assumed (but only when not in strict mode).
169 # case some defaults are assumed (but only when not in strict mode).
170 assert not strict
170 assert not strict
171
171
172 spec, params = parseparams(spec)
172 spec, params = parseparams(spec)
173
173
174 if spec in util.compengines.supportedbundlenames:
174 if spec in util.compengines.supportedbundlenames:
175 compression = spec
175 compression = spec
176 version = 'v1'
176 version = 'v1'
177 # Generaldelta repos require v2.
177 # Generaldelta repos require v2.
178 if 'generaldelta' in repo.requirements:
178 if 'generaldelta' in repo.requirements:
179 version = 'v2'
179 version = 'v2'
180 # Modern compression engines require v2.
180 # Modern compression engines require v2.
181 if compression not in _bundlespecv1compengines:
181 if compression not in _bundlespecv1compengines:
182 version = 'v2'
182 version = 'v2'
183 elif spec in _bundlespeccgversions:
183 elif spec in _bundlespeccgversions:
184 if spec == 'packed1':
184 if spec == 'packed1':
185 compression = 'none'
185 compression = 'none'
186 else:
186 else:
187 compression = 'bzip2'
187 compression = 'bzip2'
188 version = spec
188 version = spec
189 else:
189 else:
190 raise error.UnsupportedBundleSpecification(
190 raise error.UnsupportedBundleSpecification(
191 _('%s is not a recognized bundle specification') % spec)
191 _('%s is not a recognized bundle specification') % spec)
192
192
193 # Bundle version 1 only supports a known set of compression engines.
193 # Bundle version 1 only supports a known set of compression engines.
194 if version == 'v1' and compression not in _bundlespecv1compengines:
194 if version == 'v1' and compression not in _bundlespecv1compengines:
195 raise error.UnsupportedBundleSpecification(
195 raise error.UnsupportedBundleSpecification(
196 _('compression engine %s is not supported on v1 bundles') %
196 _('compression engine %s is not supported on v1 bundles') %
197 compression)
197 compression)
198
198
199 # The specification for packed1 can optionally declare the data formats
199 # The specification for packed1 can optionally declare the data formats
200 # required to apply it. If we see this metadata, compare against what the
200 # required to apply it. If we see this metadata, compare against what the
201 # repo supports and error if the bundle isn't compatible.
201 # repo supports and error if the bundle isn't compatible.
202 if version == 'packed1' and 'requirements' in params:
202 if version == 'packed1' and 'requirements' in params:
203 requirements = set(params['requirements'].split(','))
203 requirements = set(params['requirements'].split(','))
204 missingreqs = requirements - repo.supportedformats
204 missingreqs = requirements - repo.supportedformats
205 if missingreqs:
205 if missingreqs:
206 raise error.UnsupportedBundleSpecification(
206 raise error.UnsupportedBundleSpecification(
207 _('missing support for repository features: %s') %
207 _('missing support for repository features: %s') %
208 ', '.join(sorted(missingreqs)))
208 ', '.join(sorted(missingreqs)))
209
209
210 # Compute contentopts based on the version
210 # Compute contentopts based on the version
211 contentopts = _bundlespeccontentopts.get(version, {}).copy()
211 contentopts = _bundlespeccontentopts.get(version, {}).copy()
212
212
213 # Process the variants
213 # Process the variants
214 if "stream" in params and params["stream"] == "v2":
214 if "stream" in params and params["stream"] == "v2":
215 variant = _bundlespecvariants["streamv2"]
215 variant = _bundlespecvariants["streamv2"]
216 contentopts.update(variant)
216 contentopts.update(variant)
217
217
218 if not externalnames:
218 if not externalnames:
219 engine = util.compengines.forbundlename(compression)
219 engine = util.compengines.forbundlename(compression)
220 compression = engine.bundletype()[1]
220 compression = engine.bundletype()[1]
221 version = _bundlespeccgversions[version]
221 version = _bundlespeccgversions[version]
222
222
223 return bundlespec(compression, version, params, contentopts)
223 return bundlespec(compression, version, params, contentopts)
224
224
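# Illustrative sketch (not part of this module): a bundlespec string such as
# 'gzip-v2;stream=v2' is a '<compression>-<version>' pair optionally followed
# by ';'-separated parameters. A minimal standalone parser, ignoring the
# strict-mode checks and default filling handled above, might look like this
# (the helper name is hypothetical):
def _examplesplitbundlespec(spec):
    # split off the ';'-separated parameters, e.g. 'stream=v2'
    main, _sep, rest = spec.partition(';')
    params = {}
    for field in rest.split(';'):
        if '=' in field:
            key, value = field.split('=', 1)
            params[key] = value
    # split '<compression>-<version>', e.g. 'gzip-v2'
    compression, _sep, version = main.partition('-')
    return compression, version, params

# _examplesplitbundlespec('gzip-v2;stream=v2')
# -> ('gzip', 'v2', {'stream': 'v2'})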
225 def readbundle(ui, fh, fname, vfs=None):
225 def readbundle(ui, fh, fname, vfs=None):
226 header = changegroup.readexactly(fh, 4)
226 header = changegroup.readexactly(fh, 4)
227
227
228 alg = None
228 alg = None
229 if not fname:
229 if not fname:
230 fname = "stream"
230 fname = "stream"
231 if not header.startswith('HG') and header.startswith('\0'):
231 if not header.startswith('HG') and header.startswith('\0'):
232 fh = changegroup.headerlessfixup(fh, header)
232 fh = changegroup.headerlessfixup(fh, header)
233 header = "HG10"
233 header = "HG10"
234 alg = 'UN'
234 alg = 'UN'
235 elif vfs:
235 elif vfs:
236 fname = vfs.join(fname)
236 fname = vfs.join(fname)
237
237
238 magic, version = header[0:2], header[2:4]
238 magic, version = header[0:2], header[2:4]
239
239
240 if magic != 'HG':
240 if magic != 'HG':
241 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
241 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
242 if version == '10':
242 if version == '10':
243 if alg is None:
243 if alg is None:
244 alg = changegroup.readexactly(fh, 2)
244 alg = changegroup.readexactly(fh, 2)
245 return changegroup.cg1unpacker(fh, alg)
245 return changegroup.cg1unpacker(fh, alg)
246 elif version.startswith('2'):
246 elif version.startswith('2'):
247 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
247 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
248 elif version == 'S1':
248 elif version == 'S1':
249 return streamclone.streamcloneapplier(fh)
249 return streamclone.streamcloneapplier(fh)
250 else:
250 else:
251 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
251 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
252
252
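# Illustrative sketch (not part of this module): readbundle() above dispatches
# on the first four bytes of the stream. A standalone summary of that mapping:
def _exampleclassifybundleheader(header):
    # `header` is the first four bytes, e.g. 'HG10', 'HG20' or 'HGS1'
    if not header.startswith('HG'):
        return 'not a Mercurial bundle'
    version = header[2:4]
    if version == '10':
        return 'bundle1 changegroup; 2-byte compression code follows'
    if version.startswith('2'):
        return 'bundle2 container'
    if version == 'S1':
        return 'stream clone bundle'
    return 'unknown bundle version %s' % version

# _exampleclassifybundleheader('HG20') -> 'bundle2 container'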
253 def getbundlespec(ui, fh):
253 def getbundlespec(ui, fh):
254 """Infer the bundlespec from a bundle file handle.
254 """Infer the bundlespec from a bundle file handle.
255
255
256 The input file handle is seeked and the original seek position is not
256 The input file handle is seeked and the original seek position is not
257 restored.
257 restored.
258 """
258 """
259 def speccompression(alg):
259 def speccompression(alg):
260 try:
260 try:
261 return util.compengines.forbundletype(alg).bundletype()[0]
261 return util.compengines.forbundletype(alg).bundletype()[0]
262 except KeyError:
262 except KeyError:
263 return None
263 return None
264
264
265 b = readbundle(ui, fh, None)
265 b = readbundle(ui, fh, None)
266 if isinstance(b, changegroup.cg1unpacker):
266 if isinstance(b, changegroup.cg1unpacker):
267 alg = b._type
267 alg = b._type
268 if alg == '_truncatedBZ':
268 if alg == '_truncatedBZ':
269 alg = 'BZ'
269 alg = 'BZ'
270 comp = speccompression(alg)
270 comp = speccompression(alg)
271 if not comp:
271 if not comp:
272 raise error.Abort(_('unknown compression algorithm: %s') % alg)
272 raise error.Abort(_('unknown compression algorithm: %s') % alg)
273 return '%s-v1' % comp
273 return '%s-v1' % comp
274 elif isinstance(b, bundle2.unbundle20):
274 elif isinstance(b, bundle2.unbundle20):
275 if 'Compression' in b.params:
275 if 'Compression' in b.params:
276 comp = speccompression(b.params['Compression'])
276 comp = speccompression(b.params['Compression'])
277 if not comp:
277 if not comp:
278 raise error.Abort(_('unknown compression algorithm: %s') % comp)
278 raise error.Abort(_('unknown compression algorithm: %s') % comp)
279 else:
279 else:
280 comp = 'none'
280 comp = 'none'
281
281
282 version = None
282 version = None
283 for part in b.iterparts():
283 for part in b.iterparts():
284 if part.type == 'changegroup':
284 if part.type == 'changegroup':
285 version = part.params['version']
285 version = part.params['version']
286 if version in ('01', '02'):
286 if version in ('01', '02'):
287 version = 'v2'
287 version = 'v2'
288 else:
288 else:
289 raise error.Abort(_('changegroup version %s does not have '
289 raise error.Abort(_('changegroup version %s does not have '
290 'a known bundlespec') % version,
290 'a known bundlespec') % version,
291 hint=_('try upgrading your Mercurial '
291 hint=_('try upgrading your Mercurial '
292 'client'))
292 'client'))
293 elif part.type == 'stream2' and version is None:
293 elif part.type == 'stream2' and version is None:
294 # A stream2 part is required to be part of a v2 bundle
295 version = "v2"
295 version = "v2"
296 requirements = urlreq.unquote(part.params['requirements'])
296 requirements = urlreq.unquote(part.params['requirements'])
297 splitted = requirements.split()
297 splitted = requirements.split()
298 params = bundle2._formatrequirementsparams(splitted)
298 params = bundle2._formatrequirementsparams(splitted)
299 return 'none-v2;stream=v2;%s' % params
299 return 'none-v2;stream=v2;%s' % params
300
300
301 if not version:
301 if not version:
302 raise error.Abort(_('could not identify changegroup version in '
302 raise error.Abort(_('could not identify changegroup version in '
303 'bundle'))
303 'bundle'))
304
304
305 return '%s-%s' % (comp, version)
305 return '%s-%s' % (comp, version)
306 elif isinstance(b, streamclone.streamcloneapplier):
306 elif isinstance(b, streamclone.streamcloneapplier):
307 requirements = streamclone.readbundle1header(fh)[2]
307 requirements = streamclone.readbundle1header(fh)[2]
308 formatted = bundle2._formatrequirementsparams(requirements)
308 formatted = bundle2._formatrequirementsparams(requirements)
309 return 'none-packed1;%s' % formatted
309 return 'none-packed1;%s' % formatted
310 else:
310 else:
311 raise error.Abort(_('unknown bundle type: %s') % b)
311 raise error.Abort(_('unknown bundle type: %s') % b)
312
312
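# Illustrative sketch (assumption, not part of this module): the
# 'none-v2;stream=v2;...' spec returned above for stream2 parts is the
# compression/version pair followed by url-quoted requirement parameters.
# Roughly (the real code delegates the quoting to
# bundle2._formatrequirementsparams):
def _exampleformatstreamspec(requirements):
    import urllib
    raw = 'requirements=%s' % ','.join(sorted(requirements))
    return 'none-v2;stream=v2;%s' % urllib.quote(raw)

# _exampleformatstreamspec(['generaldelta', 'revlogv1'])
# -> 'none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1'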
313 def _computeoutgoing(repo, heads, common):
313 def _computeoutgoing(repo, heads, common):
314 """Computes which revs are outgoing given a set of common
314 """Computes which revs are outgoing given a set of common
315 and a set of heads.
315 and a set of heads.
316
316
317 This is a separate function so extensions can have access to
317 This is a separate function so extensions can have access to
318 the logic.
318 the logic.
319
319
320 Returns a discovery.outgoing object.
320 Returns a discovery.outgoing object.
321 """
321 """
322 cl = repo.changelog
322 cl = repo.changelog
323 if common:
323 if common:
324 hasnode = cl.hasnode
324 hasnode = cl.hasnode
325 common = [n for n in common if hasnode(n)]
325 common = [n for n in common if hasnode(n)]
326 else:
326 else:
327 common = [nullid]
327 common = [nullid]
328 if not heads:
328 if not heads:
329 heads = cl.heads()
329 heads = cl.heads()
330 return discovery.outgoing(repo, common, heads)
330 return discovery.outgoing(repo, common, heads)
331
331
332 def _forcebundle1(op):
332 def _forcebundle1(op):
333 """return true if a pull/push must use bundle1
333 """return true if a pull/push must use bundle1
334
334
335 This function is used to allow testing of the older bundle version"""
335 This function is used to allow testing of the older bundle version"""
336 ui = op.repo.ui
336 ui = op.repo.ui
337 # The goal of this config is to allow developers to choose the bundle
338 # version used during exchange. This is especially handy during tests.
339 # Value is a list of bundle versions to pick from; the highest supported
340 # version should be used.
341 #
341 #
342 # developer config: devel.legacy.exchange
342 # developer config: devel.legacy.exchange
343 exchange = ui.configlist('devel', 'legacy.exchange')
343 exchange = ui.configlist('devel', 'legacy.exchange')
344 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
344 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
345 return forcebundle1 or not op.remote.capable('bundle2')
345 return forcebundle1 or not op.remote.capable('bundle2')
346
346
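# Illustrative configuration sketch: the developer knob read by
# _forcebundle1() above lives in an hgrc. Forcing bundle1 during tests could
# look like this (example only):
#
#   [devel]
#   legacy.exchange = bundle1
#
# With that set, 'bundle1' is listed and 'bundle2' is not, so _forcebundle1()
# returns True regardless of the remote's advertised capabilities.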
347 class pushoperation(object):
347 class pushoperation(object):
348 """A object that represent a single push operation
348 """A object that represent a single push operation
349
349
350 Its purpose is to carry push-related state and very common operations.
351
351
352 A new pushoperation should be created at the beginning of each push and
352 A new pushoperation should be created at the beginning of each push and
353 discarded afterward.
353 discarded afterward.
354 """
354 """
355
355
356 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
356 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
357 bookmarks=(), pushvars=None):
357 bookmarks=(), pushvars=None):
358 # repo we push from
358 # repo we push from
359 self.repo = repo
359 self.repo = repo
360 self.ui = repo.ui
360 self.ui = repo.ui
361 # repo we push to
361 # repo we push to
362 self.remote = remote
362 self.remote = remote
363 # force option provided
363 # force option provided
364 self.force = force
364 self.force = force
365 # revs to be pushed (None is "all")
365 # revs to be pushed (None is "all")
366 self.revs = revs
366 self.revs = revs
367 # bookmarks explicitly pushed
368 self.bookmarks = bookmarks
368 self.bookmarks = bookmarks
369 # allow push of new branch
369 # allow push of new branch
370 self.newbranch = newbranch
370 self.newbranch = newbranch
371 # step already performed
371 # step already performed
372 # (used to check what steps have been already performed through bundle2)
372 # (used to check what steps have been already performed through bundle2)
373 self.stepsdone = set()
373 self.stepsdone = set()
374 # Integer version of the changegroup push result
374 # Integer version of the changegroup push result
375 # - None means nothing to push
375 # - None means nothing to push
376 # - 0 means HTTP error
376 # - 0 means HTTP error
377 # - 1 means we pushed and remote head count is unchanged *or*
377 # - 1 means we pushed and remote head count is unchanged *or*
378 # we have outgoing changesets but refused to push
378 # we have outgoing changesets but refused to push
379 # - other values as described by addchangegroup()
379 # - other values as described by addchangegroup()
380 self.cgresult = None
380 self.cgresult = None
381 # Boolean value for the bookmark push
381 # Boolean value for the bookmark push
382 self.bkresult = None
382 self.bkresult = None
383 # discover.outgoing object (contains common and outgoing data)
383 # discover.outgoing object (contains common and outgoing data)
384 self.outgoing = None
384 self.outgoing = None
385 # all remote topological heads before the push
385 # all remote topological heads before the push
386 self.remoteheads = None
386 self.remoteheads = None
387 # Details of the remote branch pre and post push
387 # Details of the remote branch pre and post push
388 #
388 #
389 # mapping: {'branch': ([remoteheads],
389 # mapping: {'branch': ([remoteheads],
390 # [newheads],
390 # [newheads],
391 # [unsyncedheads],
391 # [unsyncedheads],
392 # [discardedheads])}
392 # [discardedheads])}
393 # - branch: the branch name
393 # - branch: the branch name
394 # - remoteheads: the list of remote heads known locally
394 # - remoteheads: the list of remote heads known locally
395 # None if the branch is new
395 # None if the branch is new
396 # - newheads: the new remote heads (known locally) with outgoing pushed
396 # - newheads: the new remote heads (known locally) with outgoing pushed
397 # - unsyncedheads: the list of remote heads unknown locally.
397 # - unsyncedheads: the list of remote heads unknown locally.
398 # - discardedheads: the list of remote heads made obsolete by the push
398 # - discardedheads: the list of remote heads made obsolete by the push
399 self.pushbranchmap = None
399 self.pushbranchmap = None
400 # testable as a boolean indicating if any nodes are missing locally.
400 # testable as a boolean indicating if any nodes are missing locally.
401 self.incoming = None
401 self.incoming = None
402 # summary of the remote phase situation
402 # summary of the remote phase situation
403 self.remotephases = None
403 self.remotephases = None
404 # phase changes that must be pushed alongside the changesets
405 self.outdatedphases = None
405 self.outdatedphases = None
406 # phase changes that must be pushed if the changeset push fails
407 self.fallbackoutdatedphases = None
407 self.fallbackoutdatedphases = None
408 # outgoing obsmarkers
408 # outgoing obsmarkers
409 self.outobsmarkers = set()
409 self.outobsmarkers = set()
410 # outgoing bookmarks
410 # outgoing bookmarks
411 self.outbookmarks = []
411 self.outbookmarks = []
412 # transaction manager
412 # transaction manager
413 self.trmanager = None
413 self.trmanager = None
414 # map { pushkey partid -> callback handling failure}
414 # map { pushkey partid -> callback handling failure}
415 # used to handle exception from mandatory pushkey part failure
415 # used to handle exception from mandatory pushkey part failure
416 self.pkfailcb = {}
416 self.pkfailcb = {}
417 # an iterable of pushvars or None
417 # an iterable of pushvars or None
418 self.pushvars = pushvars
418 self.pushvars = pushvars
419
419
420 @util.propertycache
420 @util.propertycache
421 def futureheads(self):
421 def futureheads(self):
422 """future remote heads if the changeset push succeeds"""
422 """future remote heads if the changeset push succeeds"""
423 return self.outgoing.missingheads
423 return self.outgoing.missingheads
424
424
425 @util.propertycache
425 @util.propertycache
426 def fallbackheads(self):
426 def fallbackheads(self):
427 """future remote heads if the changeset push fails"""
427 """future remote heads if the changeset push fails"""
428 if self.revs is None:
428 if self.revs is None:
429 # no target to push, all common heads are relevant
430 return self.outgoing.commonheads
430 return self.outgoing.commonheads
431 unfi = self.repo.unfiltered()
431 unfi = self.repo.unfiltered()
432 # I want cheads = heads(::missingheads and ::commonheads)
432 # I want cheads = heads(::missingheads and ::commonheads)
433 # (missingheads is revs with secret changesets filtered out)
434 #
434 #
435 # This can be expressed as:
435 # This can be expressed as:
436 # cheads = ( (missingheads and ::commonheads)
436 # cheads = ( (missingheads and ::commonheads)
437 # + (commonheads and ::missingheads))"
437 # + (commonheads and ::missingheads))"
438 # )
438 # )
439 #
439 #
440 # while trying to push we already computed the following:
440 # while trying to push we already computed the following:
441 # common = (::commonheads)
441 # common = (::commonheads)
442 # missing = ((commonheads::missingheads) - commonheads)
442 # missing = ((commonheads::missingheads) - commonheads)
443 #
443 #
444 # We can pick:
444 # We can pick:
445 # * missingheads part of common (::commonheads)
445 # * missingheads part of common (::commonheads)
446 common = self.outgoing.common
446 common = self.outgoing.common
447 nm = self.repo.changelog.nodemap
447 nm = self.repo.changelog.nodemap
448 cheads = [node for node in self.revs if nm[node] in common]
448 cheads = [node for node in self.revs if nm[node] in common]
449 # and
449 # and
450 # * commonheads parents on missing
450 # * commonheads parents on missing
451 revset = unfi.set('%ln and parents(roots(%ln))',
451 revset = unfi.set('%ln and parents(roots(%ln))',
452 self.outgoing.commonheads,
452 self.outgoing.commonheads,
453 self.outgoing.missing)
453 self.outgoing.missing)
454 cheads.extend(c.node() for c in revset)
454 cheads.extend(c.node() for c in revset)
455 return cheads
455 return cheads
456
456
457 @property
457 @property
458 def commonheads(self):
458 def commonheads(self):
459 """set of all common heads after changeset bundle push"""
459 """set of all common heads after changeset bundle push"""
460 if self.cgresult:
460 if self.cgresult:
461 return self.futureheads
461 return self.futureheads
462 else:
462 else:
463 return self.fallbackheads
463 return self.fallbackheads
464
464
465 # mapping of message used when pushing bookmark
465 # mapping of message used when pushing bookmark
466 bookmsgmap = {'update': (_("updating bookmark %s\n"),
466 bookmsgmap = {'update': (_("updating bookmark %s\n"),
467 _('updating bookmark %s failed!\n')),
467 _('updating bookmark %s failed!\n')),
468 'export': (_("exporting bookmark %s\n"),
468 'export': (_("exporting bookmark %s\n"),
469 _('exporting bookmark %s failed!\n')),
469 _('exporting bookmark %s failed!\n')),
470 'delete': (_("deleting remote bookmark %s\n"),
470 'delete': (_("deleting remote bookmark %s\n"),
471 _('deleting remote bookmark %s failed!\n')),
471 _('deleting remote bookmark %s failed!\n')),
472 }
472 }
473
473
474
474
475 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
475 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
476 opargs=None):
476 opargs=None):
477 '''Push outgoing changesets (limited by revs) from a local
477 '''Push outgoing changesets (limited by revs) from a local
478 repository to remote. Return an integer:
478 repository to remote. Return an integer:
479 - None means nothing to push
479 - None means nothing to push
480 - 0 means HTTP error
480 - 0 means HTTP error
481 - 1 means we pushed and remote head count is unchanged *or*
481 - 1 means we pushed and remote head count is unchanged *or*
482 we have outgoing changesets but refused to push
482 we have outgoing changesets but refused to push
483 - other values as described by addchangegroup()
483 - other values as described by addchangegroup()
484 '''
484 '''
485 if opargs is None:
485 if opargs is None:
486 opargs = {}
486 opargs = {}
487 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
487 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
488 **pycompat.strkwargs(opargs))
488 **pycompat.strkwargs(opargs))
489 if pushop.remote.local():
489 if pushop.remote.local():
490 missing = (set(pushop.repo.requirements)
490 missing = (set(pushop.repo.requirements)
491 - pushop.remote.local().supported)
491 - pushop.remote.local().supported)
492 if missing:
492 if missing:
493 msg = _("required features are not"
493 msg = _("required features are not"
494 " supported in the destination:"
494 " supported in the destination:"
495 " %s") % (', '.join(sorted(missing)))
495 " %s") % (', '.join(sorted(missing)))
496 raise error.Abort(msg)
496 raise error.Abort(msg)
497
497
498 if not pushop.remote.canpush():
498 if not pushop.remote.canpush():
499 raise error.Abort(_("destination does not support push"))
499 raise error.Abort(_("destination does not support push"))
500
500
501 if not pushop.remote.capable('unbundle'):
501 if not pushop.remote.capable('unbundle'):
502 raise error.Abort(_('cannot push: destination does not support the '
502 raise error.Abort(_('cannot push: destination does not support the '
503 'unbundle wire protocol command'))
503 'unbundle wire protocol command'))
504
504
505 # get lock as we might write phase data
505 # get lock as we might write phase data
506 wlock = lock = None
506 wlock = lock = None
507 try:
507 try:
508 # bundle2 push may receive a reply bundle touching bookmarks or other
508 # bundle2 push may receive a reply bundle touching bookmarks or other
509 # things requiring the wlock. Take it now to ensure proper ordering.
509 # things requiring the wlock. Take it now to ensure proper ordering.
510 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
510 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
511 if (not _forcebundle1(pushop)) and maypushback:
511 if (not _forcebundle1(pushop)) and maypushback:
512 wlock = pushop.repo.wlock()
512 wlock = pushop.repo.wlock()
513 lock = pushop.repo.lock()
513 lock = pushop.repo.lock()
514 pushop.trmanager = transactionmanager(pushop.repo,
514 pushop.trmanager = transactionmanager(pushop.repo,
515 'push-response',
515 'push-response',
516 pushop.remote.url())
516 pushop.remote.url())
517 except IOError as err:
517 except IOError as err:
518 if err.errno != errno.EACCES:
518 if err.errno != errno.EACCES:
519 raise
519 raise
520 # source repo cannot be locked.
520 # source repo cannot be locked.
521 # We do not abort the push, but just disable the local phase
521 # We do not abort the push, but just disable the local phase
522 # synchronisation.
522 # synchronisation.
523 msg = 'cannot lock source repository: %s\n' % err
523 msg = 'cannot lock source repository: %s\n' % err
524 pushop.ui.debug(msg)
524 pushop.ui.debug(msg)
525
525
526 with wlock or util.nullcontextmanager(), \
526 with wlock or util.nullcontextmanager(), \
527 lock or util.nullcontextmanager(), \
527 lock or util.nullcontextmanager(), \
528 pushop.trmanager or util.nullcontextmanager():
528 pushop.trmanager or util.nullcontextmanager():
529 pushop.repo.checkpush(pushop)
529 pushop.repo.checkpush(pushop)
530 _pushdiscovery(pushop)
530 _pushdiscovery(pushop)
531 if not _forcebundle1(pushop):
531 if not _forcebundle1(pushop):
532 _pushbundle2(pushop)
532 _pushbundle2(pushop)
533 _pushchangeset(pushop)
533 _pushchangeset(pushop)
534 _pushsyncphase(pushop)
534 _pushsyncphase(pushop)
535 _pushobsolete(pushop)
535 _pushobsolete(pushop)
536 _pushbookmark(pushop)
536 _pushbookmark(pushop)
537
537
538 return pushop
538 return pushop
539
539
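# Illustrative usage sketch (hypothetical caller, not part of this module):
# a command obtains a peer and hands it to push(); the returned pushoperation
# exposes the documented result codes.
#
#     pushop = push(repo, other, force=False, revs=None, newbranch=False)
#     if pushop.cgresult == 0:
#         ui.warn('push failed (HTTP error)\n')
#     elif pushop.cgresult is None:
#         ui.status('no changesets to push\n')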
540 # list of steps to perform discovery before push
540 # list of steps to perform discovery before push
541 pushdiscoveryorder = []
541 pushdiscoveryorder = []
542
542
543 # Mapping between step name and function
543 # Mapping between step name and function
544 #
544 #
545 # This exists to help extensions wrap steps if necessary
545 # This exists to help extensions wrap steps if necessary
546 pushdiscoverymapping = {}
546 pushdiscoverymapping = {}
547
547
548 def pushdiscovery(stepname):
548 def pushdiscovery(stepname):
549 """decorator for function performing discovery before push
549 """decorator for function performing discovery before push
550
550
551 The function is added to the step -> function mapping and appended to the
551 The function is added to the step -> function mapping and appended to the
552 list of steps. Beware that decorated function will be added in order (this
552 list of steps. Beware that decorated function will be added in order (this
553 may matter).
553 may matter).
554
554
555 You can only use this decorator for a new step; if you want to wrap a step
556 from an extension, change the pushdiscoverymapping dictionary directly."""
557 def dec(func):
557 def dec(func):
558 assert stepname not in pushdiscoverymapping
558 assert stepname not in pushdiscoverymapping
559 pushdiscoverymapping[stepname] = func
559 pushdiscoverymapping[stepname] = func
560 pushdiscoveryorder.append(stepname)
560 pushdiscoveryorder.append(stepname)
561 return func
561 return func
562 return dec
562 return dec
563
563
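# Illustrative sketch (hypothetical step, not part of Mercurial): an extension
# registers an extra discovery step exactly like the built-in ones below, and
# it then runs in registration order from _pushdiscovery().
@pushdiscovery('example-noop')
def _pushdiscoveryexamplenoop(pushop):
    """no-op step, shown only to illustrate the registry pattern"""
    pushop.ui.debug('example-noop discovery step ran\n')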
564 def _pushdiscovery(pushop):
564 def _pushdiscovery(pushop):
565 """Run all discovery steps"""
565 """Run all discovery steps"""
566 for stepname in pushdiscoveryorder:
566 for stepname in pushdiscoveryorder:
567 step = pushdiscoverymapping[stepname]
567 step = pushdiscoverymapping[stepname]
568 step(pushop)
568 step(pushop)
569
569
570 @pushdiscovery('changeset')
570 @pushdiscovery('changeset')
571 def _pushdiscoverychangeset(pushop):
571 def _pushdiscoverychangeset(pushop):
572 """discover the changeset that need to be pushed"""
572 """discover the changeset that need to be pushed"""
573 fci = discovery.findcommonincoming
573 fci = discovery.findcommonincoming
574 if pushop.revs:
574 if pushop.revs:
575 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
575 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
576 ancestorsof=pushop.revs)
576 ancestorsof=pushop.revs)
577 else:
577 else:
578 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
578 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
579 common, inc, remoteheads = commoninc
579 common, inc, remoteheads = commoninc
580 fco = discovery.findcommonoutgoing
580 fco = discovery.findcommonoutgoing
581 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
581 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
582 commoninc=commoninc, force=pushop.force)
582 commoninc=commoninc, force=pushop.force)
583 pushop.outgoing = outgoing
583 pushop.outgoing = outgoing
584 pushop.remoteheads = remoteheads
584 pushop.remoteheads = remoteheads
585 pushop.incoming = inc
585 pushop.incoming = inc
586
586
587 @pushdiscovery('phase')
587 @pushdiscovery('phase')
588 def _pushdiscoveryphase(pushop):
588 def _pushdiscoveryphase(pushop):
589 """discover the phase that needs to be pushed
589 """discover the phase that needs to be pushed
590
590
591 (computed for both success and failure case for changesets push)"""
591 (computed for both success and failure case for changesets push)"""
592 outgoing = pushop.outgoing
592 outgoing = pushop.outgoing
593 unfi = pushop.repo.unfiltered()
593 unfi = pushop.repo.unfiltered()
594 remotephases = pushop.remote.listkeys('phases')
594 remotephases = pushop.remote.listkeys('phases')
595 if (pushop.ui.configbool('ui', '_usedassubrepo')
595 if (pushop.ui.configbool('ui', '_usedassubrepo')
596 and remotephases # server supports phases
596 and remotephases # server supports phases
597 and not pushop.outgoing.missing # no changesets to be pushed
597 and not pushop.outgoing.missing # no changesets to be pushed
598 and remotephases.get('publishing', False)):
598 and remotephases.get('publishing', False)):
599 # When:
599 # When:
600 # - this is a subrepo push
600 # - this is a subrepo push
601 # - and the remote supports phases
602 # - and no changesets are to be pushed
603 # - and remote is publishing
604 # We may be in the issue 3781 case!
605 # We drop the phase synchronisation that is usually done as a
606 # courtesy, since it would publish on the remote changesets that
607 # may still be draft locally.
608 pushop.outdatedphases = []
608 pushop.outdatedphases = []
609 pushop.fallbackoutdatedphases = []
609 pushop.fallbackoutdatedphases = []
610 return
610 return
611
611
612 pushop.remotephases = phases.remotephasessummary(pushop.repo,
612 pushop.remotephases = phases.remotephasessummary(pushop.repo,
613 pushop.fallbackheads,
613 pushop.fallbackheads,
614 remotephases)
614 remotephases)
615 droots = pushop.remotephases.draftroots
615 droots = pushop.remotephases.draftroots
616
616
617 extracond = ''
617 extracond = ''
618 if not pushop.remotephases.publishing:
618 if not pushop.remotephases.publishing:
619 extracond = ' and public()'
619 extracond = ' and public()'
620 revset = 'heads((%%ln::%%ln) %s)' % extracond
620 revset = 'heads((%%ln::%%ln) %s)' % extracond
621 # Get the list of all revs draft on remote by public here.
621 # Get the list of all revs draft on remote by public here.
622 # XXX Beware that revset break if droots is not strictly
622 # XXX Beware that revset break if droots is not strictly
623 # XXX root we may want to ensure it is but it is costly
623 # XXX root we may want to ensure it is but it is costly
624 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
624 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
625 if not outgoing.missing:
625 if not outgoing.missing:
626 future = fallback
626 future = fallback
627 else:
627 else:
628 # add the changesets we are going to push as draft
629 #
630 # should not be necessary for a publishing server, but because of an
631 # issue fixed in xxxxx we have to do it anyway.
632 fdroots = list(unfi.set('roots(%ln + %ln::)',
632 fdroots = list(unfi.set('roots(%ln + %ln::)',
633 outgoing.missing, droots))
633 outgoing.missing, droots))
634 fdroots = [f.node() for f in fdroots]
634 fdroots = [f.node() for f in fdroots]
635 future = list(unfi.set(revset, fdroots, pushop.futureheads))
635 future = list(unfi.set(revset, fdroots, pushop.futureheads))
636 pushop.outdatedphases = future
636 pushop.outdatedphases = future
637 pushop.fallbackoutdatedphases = fallback
637 pushop.fallbackoutdatedphases = fallback
638
638
639 @pushdiscovery('obsmarker')
639 @pushdiscovery('obsmarker')
640 def _pushdiscoveryobsmarkers(pushop):
640 def _pushdiscoveryobsmarkers(pushop):
641 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
641 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
642 and pushop.repo.obsstore
642 and pushop.repo.obsstore
643 and 'obsolete' in pushop.remote.listkeys('namespaces')):
643 and 'obsolete' in pushop.remote.listkeys('namespaces')):
644 repo = pushop.repo
644 repo = pushop.repo
645 # very naive computation, which can be quite expensive on big repos.
646 # However: evolution is currently slow on them anyway.
647 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
647 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
648 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
648 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
649
649
650 @pushdiscovery('bookmarks')
650 @pushdiscovery('bookmarks')
651 def _pushdiscoverybookmarks(pushop):
651 def _pushdiscoverybookmarks(pushop):
652 ui = pushop.ui
652 ui = pushop.ui
653 repo = pushop.repo.unfiltered()
653 repo = pushop.repo.unfiltered()
654 remote = pushop.remote
654 remote = pushop.remote
655 ui.debug("checking for updated bookmarks\n")
655 ui.debug("checking for updated bookmarks\n")
656 ancestors = ()
656 ancestors = ()
657 if pushop.revs:
657 if pushop.revs:
658 revnums = map(repo.changelog.rev, pushop.revs)
658 revnums = map(repo.changelog.rev, pushop.revs)
659 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
659 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
660 remotebookmark = remote.listkeys('bookmarks')
660 remotebookmark = remote.listkeys('bookmarks')
661
661
662 explicit = set([repo._bookmarks.expandname(bookmark)
662 explicit = set([repo._bookmarks.expandname(bookmark)
663 for bookmark in pushop.bookmarks])
663 for bookmark in pushop.bookmarks])
664
664
665 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
665 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
666 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
666 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
667
667
668 def safehex(x):
668 def safehex(x):
669 if x is None:
669 if x is None:
670 return x
670 return x
671 return hex(x)
671 return hex(x)
672
672
673 def hexifycompbookmarks(bookmarks):
673 def hexifycompbookmarks(bookmarks):
674 return [(b, safehex(scid), safehex(dcid))
674 return [(b, safehex(scid), safehex(dcid))
675 for (b, scid, dcid) in bookmarks]
675 for (b, scid, dcid) in bookmarks]
676
676
677 comp = [hexifycompbookmarks(marks) for marks in comp]
677 comp = [hexifycompbookmarks(marks) for marks in comp]
678 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
678 return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)
679
679
680 def _processcompared(pushop, pushed, explicit, remotebms, comp):
680 def _processcompared(pushop, pushed, explicit, remotebms, comp):
681 """take decision on bookmark to pull from the remote bookmark
681 """take decision on bookmark to pull from the remote bookmark
682
682
683 Exist to help extensions who want to alter this behavior.
683 Exist to help extensions who want to alter this behavior.
684 """
684 """
685 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
685 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
686
686
687 repo = pushop.repo
687 repo = pushop.repo
688
688
689 for b, scid, dcid in advsrc:
689 for b, scid, dcid in advsrc:
690 if b in explicit:
690 if b in explicit:
691 explicit.remove(b)
691 explicit.remove(b)
692 if not pushed or repo[scid].rev() in pushed:
692 if not pushed or repo[scid].rev() in pushed:
693 pushop.outbookmarks.append((b, dcid, scid))
693 pushop.outbookmarks.append((b, dcid, scid))
694 # search added bookmark
694 # search added bookmark
695 for b, scid, dcid in addsrc:
695 for b, scid, dcid in addsrc:
696 if b in explicit:
696 if b in explicit:
697 explicit.remove(b)
697 explicit.remove(b)
698 pushop.outbookmarks.append((b, '', scid))
698 pushop.outbookmarks.append((b, '', scid))
699 # search for overwritten bookmark
699 # search for overwritten bookmark
700 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
700 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
701 if b in explicit:
701 if b in explicit:
702 explicit.remove(b)
702 explicit.remove(b)
703 pushop.outbookmarks.append((b, dcid, scid))
703 pushop.outbookmarks.append((b, dcid, scid))
704 # search for bookmark to delete
704 # search for bookmark to delete
705 for b, scid, dcid in adddst:
705 for b, scid, dcid in adddst:
706 if b in explicit:
706 if b in explicit:
707 explicit.remove(b)
707 explicit.remove(b)
708 # treat as "deleted locally"
708 # treat as "deleted locally"
709 pushop.outbookmarks.append((b, dcid, ''))
709 pushop.outbookmarks.append((b, dcid, ''))
710 # identical bookmarks shouldn't get reported
710 # identical bookmarks shouldn't get reported
711 for b, scid, dcid in same:
711 for b, scid, dcid in same:
712 if b in explicit:
712 if b in explicit:
713 explicit.remove(b)
713 explicit.remove(b)
714
714
715 if explicit:
715 if explicit:
716 explicit = sorted(explicit)
716 explicit = sorted(explicit)
717 # we should probably list all of them
717 # we should probably list all of them
718 pushop.ui.warn(_('bookmark %s does not exist on the local '
718 pushop.ui.warn(_('bookmark %s does not exist on the local '
719 'or remote repository!\n') % explicit[0])
719 'or remote repository!\n') % explicit[0])
720 pushop.bkresult = 2
720 pushop.bkresult = 2
721
721
722 pushop.outbookmarks.sort()
722 pushop.outbookmarks.sort()
723
723
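# Rough standalone sketch (assumption, not part of this module) of how the
# comparison buckets handled by _processcompared() above translate into
# outbookmarks entries of the form (bookmark, old-remote-node, new-node).
# The real function additionally filters on explicitly requested bookmarks
# and on what is actually being pushed:
def _examplebookmarkentries(comp):
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
    entries = []
    entries += [(b, dcid, scid) for b, scid, dcid in advsrc]    # advance
    entries += [(b, '', scid) for b, scid, dcid in addsrc]      # export new
    for bucket in (advdst, diverge, differ):                    # overwrite
        entries += [(b, dcid, scid) for b, scid, dcid in bucket]
    entries += [(b, dcid, '') for b, scid, dcid in adddst]      # delete
    return entries                                              # 'same' is dropped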
724 def _pushcheckoutgoing(pushop):
724 def _pushcheckoutgoing(pushop):
725 outgoing = pushop.outgoing
725 outgoing = pushop.outgoing
726 unfi = pushop.repo.unfiltered()
726 unfi = pushop.repo.unfiltered()
727 if not outgoing.missing:
727 if not outgoing.missing:
728 # nothing to push
728 # nothing to push
729 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
729 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
730 return False
730 return False
731 # something to push
731 # something to push
732 if not pushop.force:
732 if not pushop.force:
733 # if repo.obsstore is empty --> there are no obsolete markers,
734 # so we can skip the iteration below
735 if unfi.obsstore:
735 if unfi.obsstore:
736 # these messages are defined here to stay within the 80-char limit
737 mso = _("push includes obsolete changeset: %s!")
737 mso = _("push includes obsolete changeset: %s!")
738 mspd = _("push includes phase-divergent changeset: %s!")
738 mspd = _("push includes phase-divergent changeset: %s!")
739 mscd = _("push includes content-divergent changeset: %s!")
739 mscd = _("push includes content-divergent changeset: %s!")
740 mst = {"orphan": _("push includes orphan changeset: %s!"),
740 mst = {"orphan": _("push includes orphan changeset: %s!"),
741 "phase-divergent": mspd,
741 "phase-divergent": mspd,
742 "content-divergent": mscd}
742 "content-divergent": mscd}
743 # If there is at least one obsolete or unstable
744 # changeset in missing, at least one of the missing
745 # heads will be obsolete or unstable. So checking
746 # heads only is ok
747 for node in outgoing.missingheads:
747 for node in outgoing.missingheads:
748 ctx = unfi[node]
748 ctx = unfi[node]
749 if ctx.obsolete():
749 if ctx.obsolete():
750 raise error.Abort(mso % ctx)
750 raise error.Abort(mso % ctx)
751 elif ctx.isunstable():
751 elif ctx.isunstable():
752 # TODO print more than one instability in the abort
752 # TODO print more than one instability in the abort
753 # message
753 # message
754 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
754 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
755
755
756 discovery.checkheads(pushop)
756 discovery.checkheads(pushop)
757 return True
757 return True
758
758
759 # List of names of steps to perform for an outgoing bundle2, order matters.
759 # List of names of steps to perform for an outgoing bundle2, order matters.
760 b2partsgenorder = []
760 b2partsgenorder = []
761
761
762 # Mapping between step name and function
762 # Mapping between step name and function
763 #
763 #
764 # This exists to help extensions wrap steps if necessary
764 # This exists to help extensions wrap steps if necessary
765 b2partsgenmapping = {}
765 b2partsgenmapping = {}
766
766
767 def b2partsgenerator(stepname, idx=None):
767 def b2partsgenerator(stepname, idx=None):
768 """decorator for function generating bundle2 part
768 """decorator for function generating bundle2 part
769
769
770 The function is added to the step -> function mapping and appended to the
770 The function is added to the step -> function mapping and appended to the
771 list of steps. Beware that decorated functions will be added in order
771 list of steps. Beware that decorated functions will be added in order
772 (this may matter).
772 (this may matter).
773
773
774 You can only use this decorator for new steps; if you want to wrap a step
775 from an extension, change the b2partsgenmapping dictionary directly."""
776 def dec(func):
776 def dec(func):
777 assert stepname not in b2partsgenmapping
777 assert stepname not in b2partsgenmapping
778 b2partsgenmapping[stepname] = func
778 b2partsgenmapping[stepname] = func
779 if idx is None:
779 if idx is None:
780 b2partsgenorder.append(stepname)
780 b2partsgenorder.append(stepname)
781 else:
781 else:
782 b2partsgenorder.insert(idx, stepname)
782 b2partsgenorder.insert(idx, stepname)
783 return func
783 return func
784 return dec
784 return dec
785
785
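# Illustrative sketch (hypothetical extension code, not part of Mercurial):
# wrapping an already-registered part generator goes through the mapping
# directly, as the decorator docstring above suggests, rather than through
# the decorator itself:
#
#     origchangeset = b2partsgenmapping['changeset']
#     def _wrappedchangeset(pushop, bundler):
#         pushop.ui.debug('about to generate the changegroup part\n')
#         return origchangeset(pushop, bundler)
#     b2partsgenmapping['changeset'] = _wrappedchangeset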
786 def _pushb2ctxcheckheads(pushop, bundler):
786 def _pushb2ctxcheckheads(pushop, bundler):
787 """Generate race condition checking parts
787 """Generate race condition checking parts
788
788
789 Exists as an independent function to aid extensions
789 Exists as an independent function to aid extensions
790 """
790 """
791 # * 'force' does not check for push races,
792 # * if we don't push anything, there is nothing to check.
793 if not pushop.force and pushop.outgoing.missingheads:
793 if not pushop.force and pushop.outgoing.missingheads:
794 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
794 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
795 emptyremote = pushop.pushbranchmap is None
795 emptyremote = pushop.pushbranchmap is None
796 if not allowunrelated or emptyremote:
796 if not allowunrelated or emptyremote:
797 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
797 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
798 else:
798 else:
799 affected = set()
799 affected = set()
800 for branch, heads in pushop.pushbranchmap.iteritems():
800 for branch, heads in pushop.pushbranchmap.iteritems():
801 remoteheads, newheads, unsyncedheads, discardedheads = heads
801 remoteheads, newheads, unsyncedheads, discardedheads = heads
802 if remoteheads is not None:
802 if remoteheads is not None:
803 remote = set(remoteheads)
803 remote = set(remoteheads)
804 affected |= set(discardedheads) & remote
804 affected |= set(discardedheads) & remote
805 affected |= remote - set(newheads)
805 affected |= remote - set(newheads)
806 if affected:
806 if affected:
807 data = iter(sorted(affected))
807 data = iter(sorted(affected))
808 bundler.newpart('check:updated-heads', data=data)
808 bundler.newpart('check:updated-heads', data=data)
809
809
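# Standalone sketch of the bookkeeping in _pushb2ctxcheckheads() above: the
# heads sent in 'check:updated-heads' are the known remote heads this push
# will obsolete or supersede, which the server re-checks to detect races.
def _exampleaffectedheads(pushbranchmap):
    affected = set()
    for branch, heads in pushbranchmap.iteritems():
        remoteheads, newheads, unsyncedheads, discardedheads = heads
        if remoteheads is None:         # branch is new on the remote
            continue
        remote = set(remoteheads)
        affected |= set(discardedheads) & remote
        affected |= remote - set(newheads)
    return affected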
810 def _pushing(pushop):
810 def _pushing(pushop):
811 """return True if we are pushing anything"""
811 """return True if we are pushing anything"""
812 return bool(pushop.outgoing.missing
812 return bool(pushop.outgoing.missing
813 or pushop.outdatedphases
813 or pushop.outdatedphases
814 or pushop.outobsmarkers
814 or pushop.outobsmarkers
815 or pushop.outbookmarks)
815 or pushop.outbookmarks)
816
816
817 @b2partsgenerator('check-bookmarks')
817 @b2partsgenerator('check-bookmarks')
818 def _pushb2checkbookmarks(pushop, bundler):
818 def _pushb2checkbookmarks(pushop, bundler):
819 """insert bookmark move checking"""
819 """insert bookmark move checking"""
820 if not _pushing(pushop) or pushop.force:
820 if not _pushing(pushop) or pushop.force:
821 return
821 return
822 b2caps = bundle2.bundle2caps(pushop.remote)
822 b2caps = bundle2.bundle2caps(pushop.remote)
823 hasbookmarkcheck = 'bookmarks' in b2caps
823 hasbookmarkcheck = 'bookmarks' in b2caps
824 if not (pushop.outbookmarks and hasbookmarkcheck):
824 if not (pushop.outbookmarks and hasbookmarkcheck):
825 return
825 return
826 data = []
826 data = []
827 for book, old, new in pushop.outbookmarks:
827 for book, old, new in pushop.outbookmarks:
828 old = bin(old)
828 old = bin(old)
829 data.append((book, old))
829 data.append((book, old))
830 checkdata = bookmod.binaryencode(data)
830 checkdata = bookmod.binaryencode(data)
831 bundler.newpart('check:bookmarks', data=checkdata)
831 bundler.newpart('check:bookmarks', data=checkdata)
832
832
833 @b2partsgenerator('check-phases')
833 @b2partsgenerator('check-phases')
834 def _pushb2checkphases(pushop, bundler):
834 def _pushb2checkphases(pushop, bundler):
835 """insert phase move checking"""
835 """insert phase move checking"""
836 if not _pushing(pushop) or pushop.force:
836 if not _pushing(pushop) or pushop.force:
837 return
837 return
838 b2caps = bundle2.bundle2caps(pushop.remote)
838 b2caps = bundle2.bundle2caps(pushop.remote)
839 hasphaseheads = 'heads' in b2caps.get('phases', ())
839 hasphaseheads = 'heads' in b2caps.get('phases', ())
840 if pushop.remotephases is not None and hasphaseheads:
840 if pushop.remotephases is not None and hasphaseheads:
841 # check that the remote phase has not changed
841 # check that the remote phase has not changed
842 checks = [[] for p in phases.allphases]
842 checks = [[] for p in phases.allphases]
843 checks[phases.public].extend(pushop.remotephases.publicheads)
843 checks[phases.public].extend(pushop.remotephases.publicheads)
844 checks[phases.draft].extend(pushop.remotephases.draftroots)
844 checks[phases.draft].extend(pushop.remotephases.draftroots)
845 if any(checks):
845 if any(checks):
846 for nodes in checks:
846 for nodes in checks:
847 nodes.sort()
847 nodes.sort()
848 checkdata = phases.binaryencode(checks)
848 checkdata = phases.binaryencode(checks)
849 bundler.newpart('check:phases', data=checkdata)
849 bundler.newpart('check:phases', data=checkdata)
850
850
851 @b2partsgenerator('changeset')
851 @b2partsgenerator('changeset')
852 def _pushb2ctx(pushop, bundler):
852 def _pushb2ctx(pushop, bundler):
853 """handle changegroup push through bundle2
853 """handle changegroup push through bundle2
854
854
855 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
855 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
856 """
856 """
857 if 'changesets' in pushop.stepsdone:
857 if 'changesets' in pushop.stepsdone:
858 return
858 return
859 pushop.stepsdone.add('changesets')
859 pushop.stepsdone.add('changesets')
860 # Send known heads to the server for race detection.
860 # Send known heads to the server for race detection.
861 if not _pushcheckoutgoing(pushop):
861 if not _pushcheckoutgoing(pushop):
862 return
862 return
863 pushop.repo.prepushoutgoinghooks(pushop)
863 pushop.repo.prepushoutgoinghooks(pushop)
864
864
865 _pushb2ctxcheckheads(pushop, bundler)
865 _pushb2ctxcheckheads(pushop, bundler)
866
866
867 b2caps = bundle2.bundle2caps(pushop.remote)
867 b2caps = bundle2.bundle2caps(pushop.remote)
868 version = '01'
868 version = '01'
869 cgversions = b2caps.get('changegroup')
869 cgversions = b2caps.get('changegroup')
870 if cgversions: # 3.1 and 3.2 ship with an empty value
870 if cgversions: # 3.1 and 3.2 ship with an empty value
871 cgversions = [v for v in cgversions
871 cgversions = [v for v in cgversions
872 if v in changegroup.supportedoutgoingversions(
872 if v in changegroup.supportedoutgoingversions(
873 pushop.repo)]
873 pushop.repo)]
874 if not cgversions:
874 if not cgversions:
875 raise ValueError(_('no common changegroup version'))
875 raise ValueError(_('no common changegroup version'))
876 version = max(cgversions)
876 version = max(cgversions)
877 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
877 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
878 'push')
878 'push')
879 cgpart = bundler.newpart('changegroup', data=cgstream)
879 cgpart = bundler.newpart('changegroup', data=cgstream)
880 if cgversions:
880 if cgversions:
881 cgpart.addparam('version', version)
881 cgpart.addparam('version', version)
882 if 'treemanifest' in pushop.repo.requirements:
882 if 'treemanifest' in pushop.repo.requirements:
883 cgpart.addparam('treemanifest', '1')
883 cgpart.addparam('treemanifest', '1')
884 def handlereply(op):
884 def handlereply(op):
885 """extract addchangegroup returns from server reply"""
885 """extract addchangegroup returns from server reply"""
886 cgreplies = op.records.getreplies(cgpart.id)
886 cgreplies = op.records.getreplies(cgpart.id)
887 assert len(cgreplies['changegroup']) == 1
887 assert len(cgreplies['changegroup']) == 1
888 pushop.cgresult = cgreplies['changegroup'][0]['return']
888 pushop.cgresult = cgreplies['changegroup'][0]['return']
889 return handlereply
889 return handlereply
890
890
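# Standalone sketch of the version negotiation done in _pushb2ctx() above:
# pick the highest changegroup version advertised by the remote that the
# local repository can also produce, defaulting to '01' when the capability
# value is empty (as shipped by Mercurial 3.1 and 3.2).
def _examplepickcgversion(remoteversions, localversions):
    if not remoteversions:
        return '01'
    common = [v for v in remoteversions if v in localversions]
    if not common:
        raise ValueError('no common changegroup version')
    return max(common)

# _examplepickcgversion(['01', '02'], ['01', '02', '03']) -> '02'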
891 @b2partsgenerator('phase')
891 @b2partsgenerator('phase')
892 def _pushb2phases(pushop, bundler):
892 def _pushb2phases(pushop, bundler):
893 """handle phase push through bundle2"""
893 """handle phase push through bundle2"""
894 if 'phases' in pushop.stepsdone:
894 if 'phases' in pushop.stepsdone:
895 return
895 return
896 b2caps = bundle2.bundle2caps(pushop.remote)
896 b2caps = bundle2.bundle2caps(pushop.remote)
897 ui = pushop.repo.ui
897 ui = pushop.repo.ui
898
898
899 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
899 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
900 haspushkey = 'pushkey' in b2caps
900 haspushkey = 'pushkey' in b2caps
901 hasphaseheads = 'heads' in b2caps.get('phases', ())
901 hasphaseheads = 'heads' in b2caps.get('phases', ())
902
902
903 if hasphaseheads and not legacyphase:
903 if hasphaseheads and not legacyphase:
904 return _pushb2phaseheads(pushop, bundler)
904 return _pushb2phaseheads(pushop, bundler)
905 elif haspushkey:
905 elif haspushkey:
906 return _pushb2phasespushkey(pushop, bundler)
906 return _pushb2phasespushkey(pushop, bundler)
907
907
908 def _pushb2phaseheads(pushop, bundler):
908 def _pushb2phaseheads(pushop, bundler):
909 """push phase information through a bundle2 - binary part"""
909 """push phase information through a bundle2 - binary part"""
910 pushop.stepsdone.add('phases')
910 pushop.stepsdone.add('phases')
911 if pushop.outdatedphases:
911 if pushop.outdatedphases:
912 updates = [[] for p in phases.allphases]
912 updates = [[] for p in phases.allphases]
913 updates[0].extend(h.node() for h in pushop.outdatedphases)
913 updates[0].extend(h.node() for h in pushop.outdatedphases)
914 phasedata = phases.binaryencode(updates)
914 phasedata = phases.binaryencode(updates)
915 bundler.newpart('phase-heads', data=phasedata)
915 bundler.newpart('phase-heads', data=phasedata)
916
916
917 def _pushb2phasespushkey(pushop, bundler):
917 def _pushb2phasespushkey(pushop, bundler):
918 """push phase information through a bundle2 - pushkey part"""
918 """push phase information through a bundle2 - pushkey part"""
919 pushop.stepsdone.add('phases')
919 pushop.stepsdone.add('phases')
920 part2node = []
920 part2node = []
921
921
922 def handlefailure(pushop, exc):
922 def handlefailure(pushop, exc):
923 targetid = int(exc.partid)
923 targetid = int(exc.partid)
924 for partid, node in part2node:
924 for partid, node in part2node:
925 if partid == targetid:
925 if partid == targetid:
926 raise error.Abort(_('updating %s to public failed') % node)
926 raise error.Abort(_('updating %s to public failed') % node)
927
927
928 enc = pushkey.encode
928 enc = pushkey.encode
929 for newremotehead in pushop.outdatedphases:
929 for newremotehead in pushop.outdatedphases:
930 part = bundler.newpart('pushkey')
930 part = bundler.newpart('pushkey')
931 part.addparam('namespace', enc('phases'))
931 part.addparam('namespace', enc('phases'))
932 part.addparam('key', enc(newremotehead.hex()))
932 part.addparam('key', enc(newremotehead.hex()))
933 part.addparam('old', enc('%d' % phases.draft))
933 part.addparam('old', enc('%d' % phases.draft))
934 part.addparam('new', enc('%d' % phases.public))
934 part.addparam('new', enc('%d' % phases.public))
935 part2node.append((part.id, newremotehead))
935 part2node.append((part.id, newremotehead))
936 pushop.pkfailcb[part.id] = handlefailure
936 pushop.pkfailcb[part.id] = handlefailure
937
937
938 def handlereply(op):
938 def handlereply(op):
939 for partid, node in part2node:
939 for partid, node in part2node:
940 partrep = op.records.getreplies(partid)
940 partrep = op.records.getreplies(partid)
941 results = partrep['pushkey']
941 results = partrep['pushkey']
942 assert len(results) <= 1
942 assert len(results) <= 1
943 msg = None
943 msg = None
944 if not results:
944 if not results:
945 msg = _('server ignored update of %s to public!\n') % node
945 msg = _('server ignored update of %s to public!\n') % node
946 elif not int(results[0]['return']):
946 elif not int(results[0]['return']):
947 msg = _('updating %s to public failed!\n') % node
947 msg = _('updating %s to public failed!\n') % node
948 if msg is not None:
948 if msg is not None:
949 pushop.ui.warn(msg)
949 pushop.ui.warn(msg)
950 return handlereply
950 return handlereply
951
951
952 @b2partsgenerator('obsmarkers')
952 @b2partsgenerator('obsmarkers')
953 def _pushb2obsmarkers(pushop, bundler):
953 def _pushb2obsmarkers(pushop, bundler):
954 if 'obsmarkers' in pushop.stepsdone:
954 if 'obsmarkers' in pushop.stepsdone:
955 return
955 return
956 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
956 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
957 if obsolete.commonversion(remoteversions) is None:
957 if obsolete.commonversion(remoteversions) is None:
958 return
958 return
959 pushop.stepsdone.add('obsmarkers')
959 pushop.stepsdone.add('obsmarkers')
960 if pushop.outobsmarkers:
960 if pushop.outobsmarkers:
961 markers = sorted(pushop.outobsmarkers)
961 markers = sorted(pushop.outobsmarkers)
962 bundle2.buildobsmarkerspart(bundler, markers)
962 bundle2.buildobsmarkerspart(bundler, markers)
963
963
964 @b2partsgenerator('bookmarks')
964 @b2partsgenerator('bookmarks')
965 def _pushb2bookmarks(pushop, bundler):
965 def _pushb2bookmarks(pushop, bundler):
966 """handle bookmark push through bundle2"""
966 """handle bookmark push through bundle2"""
967 if 'bookmarks' in pushop.stepsdone:
967 if 'bookmarks' in pushop.stepsdone:
968 return
968 return
969 b2caps = bundle2.bundle2caps(pushop.remote)
969 b2caps = bundle2.bundle2caps(pushop.remote)
970
970
971 legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
971 legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
972 legacybooks = 'bookmarks' in legacy
972 legacybooks = 'bookmarks' in legacy
973
973
974 if not legacybooks and 'bookmarks' in b2caps:
974 if not legacybooks and 'bookmarks' in b2caps:
975 return _pushb2bookmarkspart(pushop, bundler)
975 return _pushb2bookmarkspart(pushop, bundler)
976 elif 'pushkey' in b2caps:
976 elif 'pushkey' in b2caps:
977 return _pushb2bookmarkspushkey(pushop, bundler)
977 return _pushb2bookmarkspushkey(pushop, bundler)
978
978
979 def _bmaction(old, new):
979 def _bmaction(old, new):
980 """small utility for bookmark pushing"""
980 """small utility for bookmark pushing"""
981 if not old:
981 if not old:
982 return 'export'
982 return 'export'
983 elif not new:
983 elif not new:
984 return 'delete'
984 return 'delete'
985 return 'update'
985 return 'update'
986
986
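# Worked examples for _bmaction() above; the arguments are hex nodes or ''
# (the node values below are made up for illustration):
#
#   _bmaction('', 'c3f1ca2924c2')             -> 'export'  (new bookmark)
#   _bmaction('c3f1ca2924c2', '')             -> 'delete'  (removed locally)
#   _bmaction('c3f1ca2924c2', '1e4e1b8f71e0') -> 'update'  (bookmark moved)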
987 def _pushb2bookmarkspart(pushop, bundler):
987 def _pushb2bookmarkspart(pushop, bundler):
988 pushop.stepsdone.add('bookmarks')
988 pushop.stepsdone.add('bookmarks')
989 if not pushop.outbookmarks:
989 if not pushop.outbookmarks:
990 return
990 return
991
991
992 allactions = []
992 allactions = []
993 data = []
993 data = []
994 for book, old, new in pushop.outbookmarks:
994 for book, old, new in pushop.outbookmarks:
995 new = bin(new)
995 new = bin(new)
996 data.append((book, new))
996 data.append((book, new))
997 allactions.append((book, _bmaction(old, new)))
997 allactions.append((book, _bmaction(old, new)))
998 checkdata = bookmod.binaryencode(data)
998 checkdata = bookmod.binaryencode(data)
999 bundler.newpart('bookmarks', data=checkdata)
999 bundler.newpart('bookmarks', data=checkdata)
1000
1000
1001 def handlereply(op):
1001 def handlereply(op):
1002 ui = pushop.ui
1002 ui = pushop.ui
1003 # if success
1003 # if success
1004 for book, action in allactions:
1004 for book, action in allactions:
1005 ui.status(bookmsgmap[action][0] % book)
1005 ui.status(bookmsgmap[action][0] % book)
1006
1006
1007 return handlereply
1007 return handlereply
1008
1008
1009 def _pushb2bookmarkspushkey(pushop, bundler):
1009 def _pushb2bookmarkspushkey(pushop, bundler):
1010 pushop.stepsdone.add('bookmarks')
1010 pushop.stepsdone.add('bookmarks')
1011 part2book = []
1011 part2book = []
1012 enc = pushkey.encode
1012 enc = pushkey.encode
1013
1013
1014 def handlefailure(pushop, exc):
1014 def handlefailure(pushop, exc):
1015 targetid = int(exc.partid)
1015 targetid = int(exc.partid)
1016 for partid, book, action in part2book:
1016 for partid, book, action in part2book:
1017 if partid == targetid:
1017 if partid == targetid:
1018 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
1018 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
1019 # we should not be called for a part we did not generate
1019 # we should not be called for a part we did not generate
1020 assert False
1020 assert False
1021
1021
1022 for book, old, new in pushop.outbookmarks:
1022 for book, old, new in pushop.outbookmarks:
1023 part = bundler.newpart('pushkey')
1023 part = bundler.newpart('pushkey')
1024 part.addparam('namespace', enc('bookmarks'))
1024 part.addparam('namespace', enc('bookmarks'))
1025 part.addparam('key', enc(book))
1025 part.addparam('key', enc(book))
1026 part.addparam('old', enc(old))
1026 part.addparam('old', enc(old))
1027 part.addparam('new', enc(new))
1027 part.addparam('new', enc(new))
1028 action = 'update'
1028 action = 'update'
1029 if not old:
1029 if not old:
1030 action = 'export'
1030 action = 'export'
1031 elif not new:
1031 elif not new:
1032 action = 'delete'
1032 action = 'delete'
1033 part2book.append((part.id, book, action))
1033 part2book.append((part.id, book, action))
1034 pushop.pkfailcb[part.id] = handlefailure
1034 pushop.pkfailcb[part.id] = handlefailure
1035
1035
1036 def handlereply(op):
1036 def handlereply(op):
1037 ui = pushop.ui
1037 ui = pushop.ui
1038 for partid, book, action in part2book:
1038 for partid, book, action in part2book:
1039 partrep = op.records.getreplies(partid)
1039 partrep = op.records.getreplies(partid)
1040 results = partrep['pushkey']
1040 results = partrep['pushkey']
1041 assert len(results) <= 1
1041 assert len(results) <= 1
1042 if not results:
1042 if not results:
1043 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
1043 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
1044 else:
1044 else:
1045 ret = int(results[0]['return'])
1045 ret = int(results[0]['return'])
1046 if ret:
1046 if ret:
1047 ui.status(bookmsgmap[action][0] % book)
1047 ui.status(bookmsgmap[action][0] % book)
1048 else:
1048 else:
1049 ui.warn(bookmsgmap[action][1] % book)
1049 ui.warn(bookmsgmap[action][1] % book)
1050 if pushop.bkresult is not None:
1050 if pushop.bkresult is not None:
1051 pushop.bkresult = 1
1051 pushop.bkresult = 1
1052 return handlereply
1052 return handlereply
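# Each bookmark above gets its own 'pushkey' part; part2book maps the part id
# back to the bookmark so handlefailure and handlereply can report success or
# failure per bookmark once the server's reply has been processed.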
1053
1053
1054 @b2partsgenerator('pushvars', idx=0)
1054 @b2partsgenerator('pushvars', idx=0)
1055 def _getbundlesendvars(pushop, bundler):
1055 def _getbundlesendvars(pushop, bundler):
1056 '''send shellvars via bundle2'''
1056 '''send shellvars via bundle2'''
1057 pushvars = pushop.pushvars
1057 pushvars = pushop.pushvars
1058 if pushvars:
1058 if pushvars:
1059 shellvars = {}
1059 shellvars = {}
1060 for raw in pushvars:
1060 for raw in pushvars:
1061 if '=' not in raw:
1061 if '=' not in raw:
1062 msg = ("unable to parse variable '%s', should follow "
1062 msg = ("unable to parse variable '%s', should follow "
1063 "'KEY=VALUE' or 'KEY=' format")
1063 "'KEY=VALUE' or 'KEY=' format")
1064 raise error.Abort(msg % raw)
1064 raise error.Abort(msg % raw)
1065 k, v = raw.split('=', 1)
1065 k, v = raw.split('=', 1)
1066 shellvars[k] = v
1066 shellvars[k] = v
1067
1067
1068 part = bundler.newpart('pushvars')
1068 part = bundler.newpart('pushvars')
1069
1069
1070 for key, value in shellvars.iteritems():
1070 for key, value in shellvars.iteritems():
1071 part.addparam(key, value, mandatory=False)
1071 part.addparam(key, value, mandatory=False)
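# The pushvars are added with mandatory=False, i.e. as advisory parameters,
# so a receiving server that does not know a given variable can ignore it
# rather than rejecting the part.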
1072
1072
1073 def _pushbundle2(pushop):
1073 def _pushbundle2(pushop):
1074 """push data to the remote using bundle2
1074 """push data to the remote using bundle2
1075
1075
1076 The only currently supported type of data is changegroup but this will
1076 The only currently supported type of data is changegroup but this will
1077 evolve in the future."""
1077 evolve in the future."""
1078 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1078 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
1079 pushback = (pushop.trmanager
1079 pushback = (pushop.trmanager
1080 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
1080 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
1081
1081
1082 # create reply capability
1082 # create reply capability
1083 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
1083 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
1084 allowpushback=pushback,
1084 allowpushback=pushback,
1085 role='client'))
1085 role='client'))
1086 bundler.newpart('replycaps', data=capsblob)
1086 bundler.newpart('replycaps', data=capsblob)
1087 replyhandlers = []
1087 replyhandlers = []
1088 for partgenname in b2partsgenorder:
1088 for partgenname in b2partsgenorder:
1089 partgen = b2partsgenmapping[partgenname]
1089 partgen = b2partsgenmapping[partgenname]
1090 ret = partgen(pushop, bundler)
1090 ret = partgen(pushop, bundler)
1091 if callable(ret):
1091 if callable(ret):
1092 replyhandlers.append(ret)
1092 replyhandlers.append(ret)
1093 # do not push if nothing to push
1093 # do not push if nothing to push
1094 if bundler.nbparts <= 1:
1094 if bundler.nbparts <= 1:
1095 return
1095 return
1096 stream = util.chunkbuffer(bundler.getchunks())
1096 stream = util.chunkbuffer(bundler.getchunks())
1097 try:
1097 try:
1098 try:
1098 try:
1099 reply = pushop.remote.unbundle(
1099 reply = pushop.remote.unbundle(
1100 stream, ['force'], pushop.remote.url())
1100 stream, ['force'], pushop.remote.url())
1101 except error.BundleValueError as exc:
1101 except error.BundleValueError as exc:
1102 raise error.Abort(_('missing support for %s') % exc)
1102 raise error.Abort(_('missing support for %s') % exc)
1103 try:
1103 try:
1104 trgetter = None
1104 trgetter = None
1105 if pushback:
1105 if pushback:
1106 trgetter = pushop.trmanager.transaction
1106 trgetter = pushop.trmanager.transaction
1107 op = bundle2.processbundle(pushop.repo, reply, trgetter)
1107 op = bundle2.processbundle(pushop.repo, reply, trgetter)
1108 except error.BundleValueError as exc:
1108 except error.BundleValueError as exc:
1109 raise error.Abort(_('missing support for %s') % exc)
1109 raise error.Abort(_('missing support for %s') % exc)
1110 except bundle2.AbortFromPart as exc:
1110 except bundle2.AbortFromPart as exc:
1111 pushop.ui.status(_('remote: %s\n') % exc)
1111 pushop.ui.status(_('remote: %s\n') % exc)
1112 if exc.hint is not None:
1112 if exc.hint is not None:
1113 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
1113 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
1114 raise error.Abort(_('push failed on remote'))
1114 raise error.Abort(_('push failed on remote'))
1115 except error.PushkeyFailed as exc:
1115 except error.PushkeyFailed as exc:
1116 partid = int(exc.partid)
1116 partid = int(exc.partid)
1117 if partid not in pushop.pkfailcb:
1117 if partid not in pushop.pkfailcb:
1118 raise
1118 raise
1119 pushop.pkfailcb[partid](pushop, exc)
1119 pushop.pkfailcb[partid](pushop, exc)
1120 for rephand in replyhandlers:
1120 for rephand in replyhandlers:
1121 rephand(op)
1121 rephand(op)
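# Flow summary: _pushbundle2 builds the outgoing bundle by running every
# registered part generator, sends it with remote.unbundle(), processes the
# server's reply bundle locally (optionally inside the push transaction), and
# finally runs the reply handlers returned by the generators.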
1122
1122
1123 def _pushchangeset(pushop):
1123 def _pushchangeset(pushop):
1124 """Make the actual push of changeset bundle to remote repo"""
1124 """Make the actual push of changeset bundle to remote repo"""
1125 if 'changesets' in pushop.stepsdone:
1125 if 'changesets' in pushop.stepsdone:
1126 return
1126 return
1127 pushop.stepsdone.add('changesets')
1127 pushop.stepsdone.add('changesets')
1128 if not _pushcheckoutgoing(pushop):
1128 if not _pushcheckoutgoing(pushop):
1129 return
1129 return
1130
1130
1131 # Should have verified this in push().
1131 # Should have verified this in push().
1132 assert pushop.remote.capable('unbundle')
1132 assert pushop.remote.capable('unbundle')
1133
1133
1134 pushop.repo.prepushoutgoinghooks(pushop)
1134 pushop.repo.prepushoutgoinghooks(pushop)
1135 outgoing = pushop.outgoing
1135 outgoing = pushop.outgoing
1136 # TODO: get bundlecaps from remote
1136 # TODO: get bundlecaps from remote
1137 bundlecaps = None
1137 bundlecaps = None
1138 # create a changegroup from local
1138 # create a changegroup from local
1139 if pushop.revs is None and not (outgoing.excluded
1139 if pushop.revs is None and not (outgoing.excluded
1140 or pushop.repo.changelog.filteredrevs):
1140 or pushop.repo.changelog.filteredrevs):
1141 # push everything,
1141 # push everything,
1142 # use the fast path, no race possible on push
1142 # use the fast path, no race possible on push
1143 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1143 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1144 fastpath=True, bundlecaps=bundlecaps)
1144 fastpath=True, bundlecaps=bundlecaps)
1145 else:
1145 else:
1146 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1146 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1147 'push', bundlecaps=bundlecaps)
1147 'push', bundlecaps=bundlecaps)
1148
1148
1149 # apply changegroup to remote
1149 # apply changegroup to remote
1150 # local repo finds heads on server, finds out what
1150 # local repo finds heads on server, finds out what
1151 # revs it must push. once revs transferred, if server
1151 # revs it must push. once revs transferred, if server
1152 # finds it has different heads (someone else won
1152 # finds it has different heads (someone else won
1153 # commit/push race), server aborts.
1153 # commit/push race), server aborts.
1154 if pushop.force:
1154 if pushop.force:
1155 remoteheads = ['force']
1155 remoteheads = ['force']
1156 else:
1156 else:
1157 remoteheads = pushop.remoteheads
1157 remoteheads = pushop.remoteheads
1158 # ssh: return remote's addchangegroup()
1158 # ssh: return remote's addchangegroup()
1159 # http: return remote's addchangegroup() or 0 for error
1159 # http: return remote's addchangegroup() or 0 for error
1160 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1160 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1161 pushop.repo.url())
1161 pushop.repo.url())
1162
1162
1163 def _pushsyncphase(pushop):
1163 def _pushsyncphase(pushop):
1164 """synchronise phase information locally and remotely"""
1164 """synchronise phase information locally and remotely"""
1165 cheads = pushop.commonheads
1165 cheads = pushop.commonheads
1166 # even when we don't push, exchanging phase data is useful
1166 # even when we don't push, exchanging phase data is useful
1167 remotephases = pushop.remote.listkeys('phases')
1167 remotephases = pushop.remote.listkeys('phases')
1168 if (pushop.ui.configbool('ui', '_usedassubrepo')
1168 if (pushop.ui.configbool('ui', '_usedassubrepo')
1169 and remotephases # server supports phases
1169 and remotephases # server supports phases
1170 and pushop.cgresult is None # nothing was pushed
1170 and pushop.cgresult is None # nothing was pushed
1171 and remotephases.get('publishing', False)):
1171 and remotephases.get('publishing', False)):
1172 # When:
1172 # When:
1173 # - this is a subrepo push
1173 # - this is a subrepo push
1174 # - and the remote supports phases
1174 # - and the remote supports phases
1175 # - and no changeset was pushed
1175 # - and no changeset was pushed
1176 # - and remote is publishing
1176 # - and remote is publishing
1177 # We may be in the issue 3871 case!
1177 # We may be in the issue 3871 case!
1178 # We drop the possible phase synchronisation done as a
1178 # We drop the possible phase synchronisation done as a
1179 # courtesy to publish changesets possibly locally draft
1179 # courtesy to publish changesets possibly locally draft
1180 # on the remote.
1180 # on the remote.
1181 remotephases = {'publishing': 'True'}
1181 remotephases = {'publishing': 'True'}
1182 if not remotephases: # old server or public only reply from non-publishing
1182 if not remotephases: # old server or public only reply from non-publishing
1183 _localphasemove(pushop, cheads)
1183 _localphasemove(pushop, cheads)
1184 # don't push any phase data as there is nothing to push
1184 # don't push any phase data as there is nothing to push
1185 else:
1185 else:
1186 ana = phases.analyzeremotephases(pushop.repo, cheads,
1186 ana = phases.analyzeremotephases(pushop.repo, cheads,
1187 remotephases)
1187 remotephases)
1188 pheads, droots = ana
1188 pheads, droots = ana
1189 ### Apply remote phase on local
1189 ### Apply remote phase on local
1190 if remotephases.get('publishing', False):
1190 if remotephases.get('publishing', False):
1191 _localphasemove(pushop, cheads)
1191 _localphasemove(pushop, cheads)
1192 else: # publish = False
1192 else: # publish = False
1193 _localphasemove(pushop, pheads)
1193 _localphasemove(pushop, pheads)
1194 _localphasemove(pushop, cheads, phases.draft)
1194 _localphasemove(pushop, cheads, phases.draft)
1195 ### Apply local phase on remote
1195 ### Apply local phase on remote
1196
1196
1197 if pushop.cgresult:
1197 if pushop.cgresult:
1198 if 'phases' in pushop.stepsdone:
1198 if 'phases' in pushop.stepsdone:
1199 # phases already pushed though bundle2
1199 # phases already pushed though bundle2
1200 return
1200 return
1201 outdated = pushop.outdatedphases
1201 outdated = pushop.outdatedphases
1202 else:
1202 else:
1203 outdated = pushop.fallbackoutdatedphases
1203 outdated = pushop.fallbackoutdatedphases
1204
1204
1205 pushop.stepsdone.add('phases')
1205 pushop.stepsdone.add('phases')
1206
1206
1207 # filter heads already turned public by the push
1207 # filter heads already turned public by the push
1208 outdated = [c for c in outdated if c.node() not in pheads]
1208 outdated = [c for c in outdated if c.node() not in pheads]
1209 # fallback to independent pushkey command
1209 # fallback to independent pushkey command
1210 for newremotehead in outdated:
1210 for newremotehead in outdated:
1211 r = pushop.remote.pushkey('phases',
1211 r = pushop.remote.pushkey('phases',
1212 newremotehead.hex(),
1212 newremotehead.hex(),
1213 ('%d' % phases.draft),
1213 ('%d' % phases.draft),
1214 ('%d' % phases.public))
1214 ('%d' % phases.public))
1215 if not r:
1215 if not r:
1216 pushop.ui.warn(_('updating %s to public failed!\n')
1216 pushop.ui.warn(_('updating %s to public failed!\n')
1217 % newremotehead)
1217 % newremotehead)
1218
1218
1219 def _localphasemove(pushop, nodes, phase=phases.public):
1219 def _localphasemove(pushop, nodes, phase=phases.public):
1220 """move <nodes> to <phase> in the local source repo"""
1220 """move <nodes> to <phase> in the local source repo"""
1221 if pushop.trmanager:
1221 if pushop.trmanager:
1222 phases.advanceboundary(pushop.repo,
1222 phases.advanceboundary(pushop.repo,
1223 pushop.trmanager.transaction(),
1223 pushop.trmanager.transaction(),
1224 phase,
1224 phase,
1225 nodes)
1225 nodes)
1226 else:
1226 else:
1227 # repo is not locked, do not change any phases!
1227 # repo is not locked, do not change any phases!
1228 # Inform the user that phases should have been moved when
1228 # Inform the user that phases should have been moved when
1229 # applicable.
1229 # applicable.
1230 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1230 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1231 phasestr = phases.phasenames[phase]
1231 phasestr = phases.phasenames[phase]
1232 if actualmoves:
1232 if actualmoves:
1233 pushop.ui.status(_('cannot lock source repo, skipping '
1233 pushop.ui.status(_('cannot lock source repo, skipping '
1234 'local %s phase update\n') % phasestr)
1234 'local %s phase update\n') % phasestr)
1235
1235
1236 def _pushobsolete(pushop):
1236 def _pushobsolete(pushop):
1237 """utility function to push obsolete markers to a remote"""
1237 """utility function to push obsolete markers to a remote"""
1238 if 'obsmarkers' in pushop.stepsdone:
1238 if 'obsmarkers' in pushop.stepsdone:
1239 return
1239 return
1240 repo = pushop.repo
1240 repo = pushop.repo
1241 remote = pushop.remote
1241 remote = pushop.remote
1242 pushop.stepsdone.add('obsmarkers')
1242 pushop.stepsdone.add('obsmarkers')
1243 if pushop.outobsmarkers:
1243 if pushop.outobsmarkers:
1244 pushop.ui.debug('try to push obsolete markers to remote\n')
1244 pushop.ui.debug('try to push obsolete markers to remote\n')
1245 rslts = []
1245 rslts = []
1246 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1246 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1247 for key in sorted(remotedata, reverse=True):
1247 for key in sorted(remotedata, reverse=True):
1248 # reverse sort to ensure we end with dump0
1248 # reverse sort to ensure we end with dump0
1249 data = remotedata[key]
1249 data = remotedata[key]
1250 rslts.append(remote.pushkey('obsolete', key, '', data))
1250 rslts.append(remote.pushkey('obsolete', key, '', data))
1251 if [r for r in rslts if not r]:
1251 if [r for r in rslts if not r]:
1252 msg = _('failed to push some obsolete markers!\n')
1252 msg = _('failed to push some obsolete markers!\n')
1253 repo.ui.warn(msg)
1253 repo.ui.warn(msg)
1254
1254
1255 def _pushbookmark(pushop):
1255 def _pushbookmark(pushop):
1256 """Update bookmark position on remote"""
1256 """Update bookmark position on remote"""
1257 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1257 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1258 return
1258 return
1259 pushop.stepsdone.add('bookmarks')
1259 pushop.stepsdone.add('bookmarks')
1260 ui = pushop.ui
1260 ui = pushop.ui
1261 remote = pushop.remote
1261 remote = pushop.remote
1262
1262
1263 for b, old, new in pushop.outbookmarks:
1263 for b, old, new in pushop.outbookmarks:
1264 action = 'update'
1264 action = 'update'
1265 if not old:
1265 if not old:
1266 action = 'export'
1266 action = 'export'
1267 elif not new:
1267 elif not new:
1268 action = 'delete'
1268 action = 'delete'
1269 if remote.pushkey('bookmarks', b, old, new):
1269 if remote.pushkey('bookmarks', b, old, new):
1270 ui.status(bookmsgmap[action][0] % b)
1270 ui.status(bookmsgmap[action][0] % b)
1271 else:
1271 else:
1272 ui.warn(bookmsgmap[action][1] % b)
1272 ui.warn(bookmsgmap[action][1] % b)
1273 # discovery can have set the value from an invalid entry
1273 # discovery can have set the value from an invalid entry
1274 if pushop.bkresult is not None:
1274 if pushop.bkresult is not None:
1275 pushop.bkresult = 1
1275 pushop.bkresult = 1
1276
1276
1277 class pulloperation(object):
1277 class pulloperation(object):
1278 """An object that represents a single pull operation
1278 """An object that represents a single pull operation
1279
1279
1280 Its purpose is to carry pull-related state and very common operations.
1280 Its purpose is to carry pull-related state and very common operations.
1281
1281
1282 A new one should be created at the beginning of each pull and discarded
1282 A new one should be created at the beginning of each pull and discarded
1283 afterward.
1283 afterward.
1284 """
1284 """
1285
1285
1286 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1286 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1287 remotebookmarks=None, streamclonerequested=None):
1287 remotebookmarks=None, streamclonerequested=None):
1288 # repo we pull into
1288 # repo we pull into
1289 self.repo = repo
1289 self.repo = repo
1290 # repo we pull from
1290 # repo we pull from
1291 self.remote = remote
1291 self.remote = remote
1292 # revisions we try to pull (None means "all")
1292 # revisions we try to pull (None means "all")
1293 self.heads = heads
1293 self.heads = heads
1294 # bookmarks pulled explicitly
1294 # bookmarks pulled explicitly
1295 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1295 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1296 for bookmark in bookmarks]
1296 for bookmark in bookmarks]
1297 # do we force pull?
1297 # do we force pull?
1298 self.force = force
1298 self.force = force
1299 # whether a streaming clone was requested
1299 # whether a streaming clone was requested
1300 self.streamclonerequested = streamclonerequested
1300 self.streamclonerequested = streamclonerequested
1301 # transaction manager
1301 # transaction manager
1302 self.trmanager = None
1302 self.trmanager = None
1303 # set of changesets common to local and remote before the pull
1303 # set of changesets common to local and remote before the pull
1304 self.common = None
1304 self.common = None
1305 # set of pulled heads
1305 # set of pulled heads
1306 self.rheads = None
1306 self.rheads = None
1307 # list of missing changesets to fetch remotely
1307 # list of missing changesets to fetch remotely
1308 self.fetch = None
1308 self.fetch = None
1309 # remote bookmarks data
1309 # remote bookmarks data
1310 self.remotebookmarks = remotebookmarks
1310 self.remotebookmarks = remotebookmarks
1311 # result of changegroup pulling (used as return code by pull)
1311 # result of changegroup pulling (used as return code by pull)
1312 self.cgresult = None
1312 self.cgresult = None
1313 # list of steps already done
1313 # list of steps already done
1314 self.stepsdone = set()
1314 self.stepsdone = set()
1315 # Whether we attempted a clone from pre-generated bundles.
1315 # Whether we attempted a clone from pre-generated bundles.
1316 self.clonebundleattempted = False
1316 self.clonebundleattempted = False
1317
1317
1318 @util.propertycache
1318 @util.propertycache
1319 def pulledsubset(self):
1319 def pulledsubset(self):
1320 """heads of the set of changesets targeted by the pull"""
1320 """heads of the set of changesets targeted by the pull"""
1321 # compute target subset
1321 # compute target subset
1322 if self.heads is None:
1322 if self.heads is None:
1323 # We pulled everything possible
1323 # We pulled everything possible
1324 # sync on everything common
1324 # sync on everything common
1325 c = set(self.common)
1325 c = set(self.common)
1326 ret = list(self.common)
1326 ret = list(self.common)
1327 for n in self.rheads:
1327 for n in self.rheads:
1328 if n not in c:
1328 if n not in c:
1329 ret.append(n)
1329 ret.append(n)
1330 return ret
1330 return ret
1331 else:
1331 else:
1332 # We pulled a specific subset
1332 # We pulled a specific subset
1333 # sync on this subset
1333 # sync on this subset
1334 return self.heads
1334 return self.heads
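# Illustration: with heads=None the pulled subset is the union of the common
# nodes and the remote heads (e.g. common=[a, b], rheads=[b, c] -> [a, b, c]);
# with explicit heads it is simply the requested heads.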
1335
1335
1336 @util.propertycache
1336 @util.propertycache
1337 def canusebundle2(self):
1337 def canusebundle2(self):
1338 return not _forcebundle1(self)
1338 return not _forcebundle1(self)
1339
1339
1340 @util.propertycache
1340 @util.propertycache
1341 def remotebundle2caps(self):
1341 def remotebundle2caps(self):
1342 return bundle2.bundle2caps(self.remote)
1342 return bundle2.bundle2caps(self.remote)
1343
1343
1344 def gettransaction(self):
1344 def gettransaction(self):
1345 # deprecated; talk to trmanager directly
1345 # deprecated; talk to trmanager directly
1346 return self.trmanager.transaction()
1346 return self.trmanager.transaction()
1347
1347
1348 class transactionmanager(util.transactional):
1348 class transactionmanager(util.transactional):
1349 """An object to manage the life cycle of a transaction
1349 """An object to manage the life cycle of a transaction
1350
1350
1351 It creates the transaction on demand and calls the appropriate hooks when
1351 It creates the transaction on demand and calls the appropriate hooks when
1352 closing the transaction."""
1352 closing the transaction."""
1353 def __init__(self, repo, source, url):
1353 def __init__(self, repo, source, url):
1354 self.repo = repo
1354 self.repo = repo
1355 self.source = source
1355 self.source = source
1356 self.url = url
1356 self.url = url
1357 self._tr = None
1357 self._tr = None
1358
1358
1359 def transaction(self):
1359 def transaction(self):
1360 """Return an open transaction object, constructing if necessary"""
1360 """Return an open transaction object, constructing if necessary"""
1361 if not self._tr:
1361 if not self._tr:
1362 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1362 trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
1363 self._tr = self.repo.transaction(trname)
1363 self._tr = self.repo.transaction(trname)
1364 self._tr.hookargs['source'] = self.source
1364 self._tr.hookargs['source'] = self.source
1365 self._tr.hookargs['url'] = self.url
1365 self._tr.hookargs['url'] = self.url
1366 return self._tr
1366 return self._tr
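# The transaction name and hook arguments carry the operation source (for
# instance 'pull', as set up in pull() below) and the password-stripped URL,
# so hooks running when the transaction closes know where the data came from.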
1367
1367
1368 def close(self):
1368 def close(self):
1369 """close transaction if created"""
1369 """close transaction if created"""
1370 if self._tr is not None:
1370 if self._tr is not None:
1371 self._tr.close()
1371 self._tr.close()
1372
1372
1373 def release(self):
1373 def release(self):
1374 """release transaction if created"""
1374 """release transaction if created"""
1375 if self._tr is not None:
1375 if self._tr is not None:
1376 self._tr.release()
1376 self._tr.release()
1377
1377
1378 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1378 def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
1379 streamclonerequested=None):
1379 streamclonerequested=None):
1380 """Fetch repository data from a remote.
1380 """Fetch repository data from a remote.
1381
1381
1382 This is the main function used to retrieve data from a remote repository.
1382 This is the main function used to retrieve data from a remote repository.
1383
1383
1384 ``repo`` is the local repository to clone into.
1384 ``repo`` is the local repository to clone into.
1385 ``remote`` is a peer instance.
1385 ``remote`` is a peer instance.
1386 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1386 ``heads`` is an iterable of revisions we want to pull. ``None`` (the
1387 default) means to pull everything from the remote.
1387 default) means to pull everything from the remote.
1388 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1388 ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
1389 default, all remote bookmarks are pulled.
1389 default, all remote bookmarks are pulled.
1390 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1390 ``opargs`` are additional keyword arguments to pass to ``pulloperation``
1391 initialization.
1391 initialization.
1392 ``streamclonerequested`` is a boolean indicating whether a "streaming
1392 ``streamclonerequested`` is a boolean indicating whether a "streaming
1393 clone" is requested. A "streaming clone" is essentially a raw file copy
1393 clone" is requested. A "streaming clone" is essentially a raw file copy
1394 of revlogs from the server. This only works when the local repository is
1394 of revlogs from the server. This only works when the local repository is
1395 empty. The default value of ``None`` means to respect the server
1395 empty. The default value of ``None`` means to respect the server
1396 configuration for preferring stream clones.
1396 configuration for preferring stream clones.
1397
1397
1398 Returns the ``pulloperation`` created for this pull.
1398 Returns the ``pulloperation`` created for this pull.
1399 """
1399 """
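# Illustrative usage (a rough sketch, not part of the original code): a caller
# holding a local repository `repo` and a peer `other` could drive a pull
# roughly like this; the bookmark name is purely an example:
#
#     pullop = pull(repo, other, heads=None, bookmarks=('@',))
#     if pullop.cgresult == 0:
#         repo.ui.status('no changes found\n')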
1400 if opargs is None:
1400 if opargs is None:
1401 opargs = {}
1401 opargs = {}
1402 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1402 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
1403 streamclonerequested=streamclonerequested,
1403 streamclonerequested=streamclonerequested,
1404 **pycompat.strkwargs(opargs))
1404 **pycompat.strkwargs(opargs))
1405
1405
1406 peerlocal = pullop.remote.local()
1406 peerlocal = pullop.remote.local()
1407 if peerlocal:
1407 if peerlocal:
1408 missing = set(peerlocal.requirements) - pullop.repo.supported
1408 missing = set(peerlocal.requirements) - pullop.repo.supported
1409 if missing:
1409 if missing:
1410 msg = _("required features are not"
1410 msg = _("required features are not"
1411 " supported in the destination:"
1411 " supported in the destination:"
1412 " %s") % (', '.join(sorted(missing)))
1412 " %s") % (', '.join(sorted(missing)))
1413 raise error.Abort(msg)
1413 raise error.Abort(msg)
1414
1414
1415 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1415 pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
1416 with repo.wlock(), repo.lock(), pullop.trmanager:
1416 with repo.wlock(), repo.lock(), pullop.trmanager:
1417 # This should ideally be in _pullbundle2(). However, it needs to run
1417 # This should ideally be in _pullbundle2(). However, it needs to run
1418 # before discovery to avoid extra work.
1418 # before discovery to avoid extra work.
1419 _maybeapplyclonebundle(pullop)
1419 _maybeapplyclonebundle(pullop)
1420 streamclone.maybeperformlegacystreamclone(pullop)
1420 streamclone.maybeperformlegacystreamclone(pullop)
1421 _pulldiscovery(pullop)
1421 _pulldiscovery(pullop)
1422 if pullop.canusebundle2:
1422 if pullop.canusebundle2:
1423 _pullbundle2(pullop)
1423 _pullbundle2(pullop)
1424 _pullchangeset(pullop)
1424 _pullchangeset(pullop)
1425 _pullphase(pullop)
1425 _pullphase(pullop)
1426 _pullbookmarks(pullop)
1426 _pullbookmarks(pullop)
1427 _pullobsolete(pullop)
1427 _pullobsolete(pullop)
1428
1428
1429 # storing remotenames
1429 # storing remotenames
1430 if repo.ui.configbool('experimental', 'remotenames'):
1430 if repo.ui.configbool('experimental', 'remotenames'):
1431 logexchange.pullremotenames(repo, remote)
1431 logexchange.pullremotenames(repo, remote)
1432
1432
1433 return pullop
1433 return pullop
1434
1434
1435 # list of steps to perform discovery before pull
1435 # list of steps to perform discovery before pull
1436 pulldiscoveryorder = []
1436 pulldiscoveryorder = []
1437
1437
1438 # Mapping between step name and function
1438 # Mapping between step name and function
1439 #
1439 #
1440 # This exists to help extensions wrap steps if necessary
1440 # This exists to help extensions wrap steps if necessary
1441 pulldiscoverymapping = {}
1441 pulldiscoverymapping = {}
1442
1442
1443 def pulldiscovery(stepname):
1443 def pulldiscovery(stepname):
1444 """decorator for function performing discovery before pull
1444 """decorator for function performing discovery before pull
1445
1445
1446 The function is added to the step -> function mapping and appended to the
1446 The function is added to the step -> function mapping and appended to the
1447 list of steps. Beware that decorated functions will be added in order (this
1447 list of steps. Beware that decorated functions will be added in order (this
1448 may matter).
1448 may matter).
1449
1449
1450 You can only use this decorator for a new step; if you want to wrap a step
1450 You can only use this decorator for a new step; if you want to wrap a step
1451 from an extension, change the pulldiscovery dictionary directly."""
1451 from an extension, change the pulldiscovery dictionary directly."""
1452 def dec(func):
1452 def dec(func):
1453 assert stepname not in pulldiscoverymapping
1453 assert stepname not in pulldiscoverymapping
1454 pulldiscoverymapping[stepname] = func
1454 pulldiscoverymapping[stepname] = func
1455 pulldiscoveryorder.append(stepname)
1455 pulldiscoveryorder.append(stepname)
1456 return func
1456 return func
1457 return dec
1457 return dec
1458
1458
1459 def _pulldiscovery(pullop):
1459 def _pulldiscovery(pullop):
1460 """Run all discovery steps"""
1460 """Run all discovery steps"""
1461 for stepname in pulldiscoveryorder:
1461 for stepname in pulldiscoveryorder:
1462 step = pulldiscoverymapping[stepname]
1462 step = pulldiscoverymapping[stepname]
1463 step(pullop)
1463 step(pullop)
1464
1464
1465 @pulldiscovery('b1:bookmarks')
1465 @pulldiscovery('b1:bookmarks')
1466 def _pullbookmarkbundle1(pullop):
1466 def _pullbookmarkbundle1(pullop):
1467 """fetch bookmark data in bundle1 case
1467 """fetch bookmark data in bundle1 case
1468
1468
1469 If not using bundle2, we have to fetch bookmarks before changeset
1469 If not using bundle2, we have to fetch bookmarks before changeset
1470 discovery to reduce the chance and impact of race conditions."""
1470 discovery to reduce the chance and impact of race conditions."""
1471 if pullop.remotebookmarks is not None:
1471 if pullop.remotebookmarks is not None:
1472 return
1472 return
1473 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1473 if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
1474 # all known bundle2 servers now support listkeys, but let's be nice with
1474 # all known bundle2 servers now support listkeys, but let's be nice with
1475 # new implementations.
1475 # new implementations.
1476 return
1476 return
1477 books = pullop.remote.listkeys('bookmarks')
1477 books = pullop.remote.listkeys('bookmarks')
1478 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1478 pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)
1479
1479
1480
1480
1481 @pulldiscovery('changegroup')
1481 @pulldiscovery('changegroup')
1482 def _pulldiscoverychangegroup(pullop):
1482 def _pulldiscoverychangegroup(pullop):
1483 """discovery phase for the pull
1483 """discovery phase for the pull
1484
1484
1485 Currently handles changeset discovery only; will change to handle all
1485 Currently handles changeset discovery only; will change to handle all
1486 discovery at some point."""
1486 discovery at some point."""
1487 tmp = discovery.findcommonincoming(pullop.repo,
1487 tmp = discovery.findcommonincoming(pullop.repo,
1488 pullop.remote,
1488 pullop.remote,
1489 heads=pullop.heads,
1489 heads=pullop.heads,
1490 force=pullop.force)
1490 force=pullop.force)
1491 common, fetch, rheads = tmp
1491 common, fetch, rheads = tmp
1492 nm = pullop.repo.unfiltered().changelog.nodemap
1492 nm = pullop.repo.unfiltered().changelog.nodemap
1493 if fetch and rheads:
1493 if fetch and rheads:
1494 # If a remote head is filtered locally, put it back in common.
1494 # If a remote head is filtered locally, put it back in common.
1495 #
1495 #
1496 # This is a hackish solution to catch most of the "common but locally
1496 # This is a hackish solution to catch most of the "common but locally
1497 # hidden" situations. We do not perform discovery on the unfiltered
1497 # hidden" situations. We do not perform discovery on the unfiltered
1498 # repository because it ends up doing a pathological number of round
1498 # repository because it ends up doing a pathological number of round
1499 # trips for a huge amount of changesets we do not care about.
1499 # trips for a huge amount of changesets we do not care about.
1500 #
1500 #
1501 # If a set of such "common but filtered" changesets exists on the server
1501 # If a set of such "common but filtered" changesets exists on the server
1502 # but does not include a remote head, we'll not be able to detect it,
1502 # but does not include a remote head, we'll not be able to detect it,
1503 scommon = set(common)
1503 scommon = set(common)
1504 for n in rheads:
1504 for n in rheads:
1505 if n in nm:
1505 if n in nm:
1506 if n not in scommon:
1506 if n not in scommon:
1507 common.append(n)
1507 common.append(n)
1508 if set(rheads).issubset(set(common)):
1508 if set(rheads).issubset(set(common)):
1509 fetch = []
1509 fetch = []
1510 pullop.common = common
1510 pullop.common = common
1511 pullop.fetch = fetch
1511 pullop.fetch = fetch
1512 pullop.rheads = rheads
1512 pullop.rheads = rheads
1513
1513
1514 def _pullbundle2(pullop):
1514 def _pullbundle2(pullop):
1515 """pull data using bundle2
1515 """pull data using bundle2
1516
1516
1517 For now, the only supported data is the changegroup."""
1517 For now, the only supported data is the changegroup."""
1518 kwargs = {'bundlecaps': caps20to10(pullop.repo, role='client')}
1518 kwargs = {'bundlecaps': caps20to10(pullop.repo, role='client')}
1519
1519
1520 # make ui easier to access
1520 # make ui easier to access
1521 ui = pullop.repo.ui
1521 ui = pullop.repo.ui
1522
1522
1523 # At the moment we don't do stream clones over bundle2. If that is
1523 # At the moment we don't do stream clones over bundle2. If that is
1524 # implemented then here's where the check for that will go.
1524 # implemented then here's where the check for that will go.
1525 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1525 streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]
1526
1526
1527 # declare pull perimeters
1527 # declare pull perimeters
1528 kwargs['common'] = pullop.common
1528 kwargs['common'] = pullop.common
1529 kwargs['heads'] = pullop.heads or pullop.rheads
1529 kwargs['heads'] = pullop.heads or pullop.rheads
1530
1530
1531 if streaming:
1531 if streaming:
1532 kwargs['cg'] = False
1532 kwargs['cg'] = False
1533 kwargs['stream'] = True
1533 kwargs['stream'] = True
1534 pullop.stepsdone.add('changegroup')
1534 pullop.stepsdone.add('changegroup')
1535 pullop.stepsdone.add('phases')
1535 pullop.stepsdone.add('phases')
1536
1536
1537 else:
1537 else:
1538 # pulling changegroup
1538 # pulling changegroup
1539 pullop.stepsdone.add('changegroup')
1539 pullop.stepsdone.add('changegroup')
1540
1540
1541 kwargs['cg'] = pullop.fetch
1541 kwargs['cg'] = pullop.fetch
1542
1542
1543 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
1543 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
1544 hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
1544 hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
1545 if (not legacyphase and hasbinaryphase):
1545 if (not legacyphase and hasbinaryphase):
1546 kwargs['phases'] = True
1546 kwargs['phases'] = True
1547 pullop.stepsdone.add('phases')
1547 pullop.stepsdone.add('phases')
1548
1548
1549 if 'listkeys' in pullop.remotebundle2caps:
1549 if 'listkeys' in pullop.remotebundle2caps:
1550 if 'phases' not in pullop.stepsdone:
1550 if 'phases' not in pullop.stepsdone:
1551 kwargs['listkeys'] = ['phases']
1551 kwargs['listkeys'] = ['phases']
1552
1552
1553 bookmarksrequested = False
1553 bookmarksrequested = False
1554 legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
1554 legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
1555 hasbinarybook = 'bookmarks' in pullop.remotebundle2caps
1555 hasbinarybook = 'bookmarks' in pullop.remotebundle2caps
1556
1556
1557 if pullop.remotebookmarks is not None:
1557 if pullop.remotebookmarks is not None:
1558 pullop.stepsdone.add('request-bookmarks')
1558 pullop.stepsdone.add('request-bookmarks')
1559
1559
1560 if ('request-bookmarks' not in pullop.stepsdone
1560 if ('request-bookmarks' not in pullop.stepsdone
1561 and pullop.remotebookmarks is None
1561 and pullop.remotebookmarks is None
1562 and not legacybookmark and hasbinarybook):
1562 and not legacybookmark and hasbinarybook):
1563 kwargs['bookmarks'] = True
1563 kwargs['bookmarks'] = True
1564 bookmarksrequested = True
1564 bookmarksrequested = True
1565
1565
1566 if 'listkeys' in pullop.remotebundle2caps:
1566 if 'listkeys' in pullop.remotebundle2caps:
1567 if 'request-bookmarks' not in pullop.stepsdone:
1567 if 'request-bookmarks' not in pullop.stepsdone:
1568 # make sure to always include bookmark data when migrating
1568 # make sure to always include bookmark data when migrating
1569 # `hg incoming --bundle` to using this function.
1569 # `hg incoming --bundle` to using this function.
1570 pullop.stepsdone.add('request-bookmarks')
1570 pullop.stepsdone.add('request-bookmarks')
1571 kwargs.setdefault('listkeys', []).append('bookmarks')
1571 kwargs.setdefault('listkeys', []).append('bookmarks')
1572
1572
1573 # If this is a full pull / clone and the server supports the clone bundles
1573 # If this is a full pull / clone and the server supports the clone bundles
1574 # feature, tell the server whether we attempted a clone bundle. The
1574 # feature, tell the server whether we attempted a clone bundle. The
1575 # presence of this flag indicates the client supports clone bundles. This
1575 # presence of this flag indicates the client supports clone bundles. This
1576 # will enable the server to treat clients that support clone bundles
1576 # will enable the server to treat clients that support clone bundles
1577 # differently from those that don't.
1577 # differently from those that don't.
1578 if (pullop.remote.capable('clonebundles')
1578 if (pullop.remote.capable('clonebundles')
1579 and pullop.heads is None and list(pullop.common) == [nullid]):
1579 and pullop.heads is None and list(pullop.common) == [nullid]):
1580 kwargs['cbattempted'] = pullop.clonebundleattempted
1580 kwargs['cbattempted'] = pullop.clonebundleattempted
1581
1581
1582 if streaming:
1582 if streaming:
1583 pullop.repo.ui.status(_('streaming all changes\n'))
1583 pullop.repo.ui.status(_('streaming all changes\n'))
1584 elif not pullop.fetch:
1584 elif not pullop.fetch:
1585 pullop.repo.ui.status(_("no changes found\n"))
1585 pullop.repo.ui.status(_("no changes found\n"))
1586 pullop.cgresult = 0
1586 pullop.cgresult = 0
1587 else:
1587 else:
1588 if pullop.heads is None and list(pullop.common) == [nullid]:
1588 if pullop.heads is None and list(pullop.common) == [nullid]:
1589 pullop.repo.ui.status(_("requesting all changes\n"))
1589 pullop.repo.ui.status(_("requesting all changes\n"))
1590 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1590 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1591 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1591 remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
1592 if obsolete.commonversion(remoteversions) is not None:
1592 if obsolete.commonversion(remoteversions) is not None:
1593 kwargs['obsmarkers'] = True
1593 kwargs['obsmarkers'] = True
1594 pullop.stepsdone.add('obsmarkers')
1594 pullop.stepsdone.add('obsmarkers')
1595 _pullbundle2extraprepare(pullop, kwargs)
1595 _pullbundle2extraprepare(pullop, kwargs)
1596 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1596 bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
1597 try:
1597 try:
1598 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
1598 op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction,
1599 source='pull')
1599 op.modes['bookmarks'] = 'records'
1600 op.modes['bookmarks'] = 'records'
1600 bundle2.processbundle(pullop.repo, bundle, op=op)
1601 bundle2.processbundle(pullop.repo, bundle, op=op)
1601 except bundle2.AbortFromPart as exc:
1602 except bundle2.AbortFromPart as exc:
1602 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1603 pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
1603 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1604 raise error.Abort(_('pull failed on remote'), hint=exc.hint)
1604 except error.BundleValueError as exc:
1605 except error.BundleValueError as exc:
1605 raise error.Abort(_('missing support for %s') % exc)
1606 raise error.Abort(_('missing support for %s') % exc)
1606
1607
1607 if pullop.fetch:
1608 if pullop.fetch:
1608 pullop.cgresult = bundle2.combinechangegroupresults(op)
1609 pullop.cgresult = bundle2.combinechangegroupresults(op)
1609
1610
1610 # processing phases change
1611 # processing phases change
1611 for namespace, value in op.records['listkeys']:
1612 for namespace, value in op.records['listkeys']:
1612 if namespace == 'phases':
1613 if namespace == 'phases':
1613 _pullapplyphases(pullop, value)
1614 _pullapplyphases(pullop, value)
1614
1615
1615 # processing bookmark update
1616 # processing bookmark update
1616 if bookmarksrequested:
1617 if bookmarksrequested:
1617 books = {}
1618 books = {}
1618 for record in op.records['bookmarks']:
1619 for record in op.records['bookmarks']:
1619 books[record['bookmark']] = record["node"]
1620 books[record['bookmark']] = record["node"]
1620 pullop.remotebookmarks = books
1621 pullop.remotebookmarks = books
1621 else:
1622 else:
1622 for namespace, value in op.records['listkeys']:
1623 for namespace, value in op.records['listkeys']:
1623 if namespace == 'bookmarks':
1624 if namespace == 'bookmarks':
1624 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
1625 pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)
1625
1626
1626 # bookmark data were either already there or pulled in the bundle
1627 # bookmark data were either already there or pulled in the bundle
1627 if pullop.remotebookmarks is not None:
1628 if pullop.remotebookmarks is not None:
1628 _pullbookmarks(pullop)
1629 _pullbookmarks(pullop)
1629
1630
1630 def _pullbundle2extraprepare(pullop, kwargs):
1631 def _pullbundle2extraprepare(pullop, kwargs):
1631 """hook function so that extensions can extend the getbundle call"""
1632 """hook function so that extensions can extend the getbundle call"""
1632
1633
1633 def _pullchangeset(pullop):
1634 def _pullchangeset(pullop):
1634 """pull changeset from unbundle into the local repo"""
1635 """pull changeset from unbundle into the local repo"""
1635 # We delay opening the transaction as late as possible, so we neither
1636 # We delay opening the transaction as late as possible, so we neither
1636 # open a transaction for nothing nor break a future useful
1637 # open a transaction for nothing nor break a future useful
1637 # rollback call
1638 # rollback call
1638 if 'changegroup' in pullop.stepsdone:
1639 if 'changegroup' in pullop.stepsdone:
1639 return
1640 return
1640 pullop.stepsdone.add('changegroup')
1641 pullop.stepsdone.add('changegroup')
1641 if not pullop.fetch:
1642 if not pullop.fetch:
1642 pullop.repo.ui.status(_("no changes found\n"))
1643 pullop.repo.ui.status(_("no changes found\n"))
1643 pullop.cgresult = 0
1644 pullop.cgresult = 0
1644 return
1645 return
1645 tr = pullop.gettransaction()
1646 tr = pullop.gettransaction()
1646 if pullop.heads is None and list(pullop.common) == [nullid]:
1647 if pullop.heads is None and list(pullop.common) == [nullid]:
1647 pullop.repo.ui.status(_("requesting all changes\n"))
1648 pullop.repo.ui.status(_("requesting all changes\n"))
1648 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1649 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1649 # issue1320, avoid a race if remote changed after discovery
1650 # issue1320, avoid a race if remote changed after discovery
1650 pullop.heads = pullop.rheads
1651 pullop.heads = pullop.rheads
1651
1652
1652 if pullop.remote.capable('getbundle'):
1653 if pullop.remote.capable('getbundle'):
1653 # TODO: get bundlecaps from remote
1654 # TODO: get bundlecaps from remote
1654 cg = pullop.remote.getbundle('pull', common=pullop.common,
1655 cg = pullop.remote.getbundle('pull', common=pullop.common,
1655 heads=pullop.heads or pullop.rheads)
1656 heads=pullop.heads or pullop.rheads)
1656 elif pullop.heads is None:
1657 elif pullop.heads is None:
1657 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1658 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1658 elif not pullop.remote.capable('changegroupsubset'):
1659 elif not pullop.remote.capable('changegroupsubset'):
1659 raise error.Abort(_("partial pull cannot be done because "
1660 raise error.Abort(_("partial pull cannot be done because "
1660 "other repository doesn't support "
1661 "other repository doesn't support "
1661 "changegroupsubset."))
1662 "changegroupsubset."))
1662 else:
1663 else:
1663 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1664 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1664 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1665 bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
1665 pullop.remote.url())
1666 pullop.remote.url())
1666 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
1667 pullop.cgresult = bundle2.combinechangegroupresults(bundleop)
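# Fallback chain: the changegroup is fetched via getbundle when the remote
# supports it, otherwise via the legacy changegroup/changegroupsubset
# commands, and is then applied locally through bundle2.applybundle.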
1667
1668
1668 def _pullphase(pullop):
1669 def _pullphase(pullop):
1669 # Get remote phases data from remote
1670 # Get remote phases data from remote
1670 if 'phases' in pullop.stepsdone:
1671 if 'phases' in pullop.stepsdone:
1671 return
1672 return
1672 remotephases = pullop.remote.listkeys('phases')
1673 remotephases = pullop.remote.listkeys('phases')
1673 _pullapplyphases(pullop, remotephases)
1674 _pullapplyphases(pullop, remotephases)
1674
1675
1675 def _pullapplyphases(pullop, remotephases):
1676 def _pullapplyphases(pullop, remotephases):
1676 """apply phase movement from observed remote state"""
1677 """apply phase movement from observed remote state"""
1677 if 'phases' in pullop.stepsdone:
1678 if 'phases' in pullop.stepsdone:
1678 return
1679 return
1679 pullop.stepsdone.add('phases')
1680 pullop.stepsdone.add('phases')
1680 publishing = bool(remotephases.get('publishing', False))
1681 publishing = bool(remotephases.get('publishing', False))
1681 if remotephases and not publishing:
1682 if remotephases and not publishing:
1682 # remote is new and non-publishing
1683 # remote is new and non-publishing
1683 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1684 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1684 pullop.pulledsubset,
1685 pullop.pulledsubset,
1685 remotephases)
1686 remotephases)
1686 dheads = pullop.pulledsubset
1687 dheads = pullop.pulledsubset
1687 else:
1688 else:
1688 # Remote is old or publishing all common changesets
1689 # Remote is old or publishing all common changesets
1689 # should be seen as public
1690 # should be seen as public
1690 pheads = pullop.pulledsubset
1691 pheads = pullop.pulledsubset
1691 dheads = []
1692 dheads = []
1692 unfi = pullop.repo.unfiltered()
1693 unfi = pullop.repo.unfiltered()
1693 phase = unfi._phasecache.phase
1694 phase = unfi._phasecache.phase
1694 rev = unfi.changelog.nodemap.get
1695 rev = unfi.changelog.nodemap.get
1695 public = phases.public
1696 public = phases.public
1696 draft = phases.draft
1697 draft = phases.draft
1697
1698
1698 # exclude changesets already public locally and update the others
1699 # exclude changesets already public locally and update the others
1699 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1700 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1700 if pheads:
1701 if pheads:
1701 tr = pullop.gettransaction()
1702 tr = pullop.gettransaction()
1702 phases.advanceboundary(pullop.repo, tr, public, pheads)
1703 phases.advanceboundary(pullop.repo, tr, public, pheads)
1703
1704
1704 # exclude changesets already draft locally and update the others
1705 # exclude changesets already draft locally and update the others
1705 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1706 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1706 if dheads:
1707 if dheads:
1707 tr = pullop.gettransaction()
1708 tr = pullop.gettransaction()
1708 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1709 phases.advanceboundary(pullop.repo, tr, draft, dheads)
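# Phases are advanced in two passes above: heads the remote reports as public
# are advanced to public, then the remaining pulled heads are advanced to
# draft; nodes that already have the target phase or a more public one are
# filtered out first.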
1709
1710
1710 def _pullbookmarks(pullop):
1711 def _pullbookmarks(pullop):
1711 """process the remote bookmark information to update the local one"""
1712 """process the remote bookmark information to update the local one"""
1712 if 'bookmarks' in pullop.stepsdone:
1713 if 'bookmarks' in pullop.stepsdone:
1713 return
1714 return
1714 pullop.stepsdone.add('bookmarks')
1715 pullop.stepsdone.add('bookmarks')
1715 repo = pullop.repo
1716 repo = pullop.repo
1716 remotebookmarks = pullop.remotebookmarks
1717 remotebookmarks = pullop.remotebookmarks
1717 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1718 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1718 pullop.remote.url(),
1719 pullop.remote.url(),
1719 pullop.gettransaction,
1720 pullop.gettransaction,
1720 explicit=pullop.explicitbookmarks)
1721 explicit=pullop.explicitbookmarks)
1721
1722
1722 def _pullobsolete(pullop):
1723 def _pullobsolete(pullop):
1723 """utility function to pull obsolete markers from a remote
1724 """utility function to pull obsolete markers from a remote
1724
1725
1725 `gettransaction` is a function that returns the pull transaction, creating
1726 `gettransaction` is a function that returns the pull transaction, creating
1726 one if necessary. We return the transaction to inform the calling code that
1727 one if necessary. We return the transaction to inform the calling code that
1727 a new transaction has been created (when applicable).
1728 a new transaction has been created (when applicable).
1728
1729
1729 Exists mostly to allow overriding for experimentation purposes"""
1730 Exists mostly to allow overriding for experimentation purposes"""
1730 if 'obsmarkers' in pullop.stepsdone:
1731 if 'obsmarkers' in pullop.stepsdone:
1731 return
1732 return
1732 pullop.stepsdone.add('obsmarkers')
1733 pullop.stepsdone.add('obsmarkers')
1733 tr = None
1734 tr = None
1734 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1735 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1735 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1736 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1736 remoteobs = pullop.remote.listkeys('obsolete')
1737 remoteobs = pullop.remote.listkeys('obsolete')
1737 if 'dump0' in remoteobs:
1738 if 'dump0' in remoteobs:
1738 tr = pullop.gettransaction()
1739 tr = pullop.gettransaction()
1739 markers = []
1740 markers = []
1740 for key in sorted(remoteobs, reverse=True):
1741 for key in sorted(remoteobs, reverse=True):
1741 if key.startswith('dump'):
1742 if key.startswith('dump'):
1742 data = util.b85decode(remoteobs[key])
1743 data = util.b85decode(remoteobs[key])
1743 version, newmarks = obsolete._readmarkers(data)
1744 version, newmarks = obsolete._readmarkers(data)
1744 markers += newmarks
1745 markers += newmarks
1745 if markers:
1746 if markers:
1746 pullop.repo.obsstore.add(tr, markers)
1747 pullop.repo.obsstore.add(tr, markers)
1747 pullop.repo.invalidatevolatilesets()
1748 pullop.repo.invalidatevolatilesets()
1748 return tr
1749 return tr
1749
1750
1750 def caps20to10(repo, role):
1751 def caps20to10(repo, role):
1751 """return a set with appropriate options to use bundle20 during getbundle"""
1752 """return a set with appropriate options to use bundle20 during getbundle"""
1752 caps = {'HG20'}
1753 caps = {'HG20'}
1753 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
1754 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role))
1754 caps.add('bundle2=' + urlreq.quote(capsblob))
1755 caps.add('bundle2=' + urlreq.quote(capsblob))
1755 return caps
1756 return caps
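# For example, the returned set looks roughly like
# {'HG20', 'bundle2=<urlquoted capability blob>'}: the HG20 marker plus the
# repository's bundle2 capabilities encoded into the legacy bundlecaps value.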
1756
1757
1757 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1758 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1758 getbundle2partsorder = []
1759 getbundle2partsorder = []
1759
1760
1760 # Mapping between step name and function
1761 # Mapping between step name and function
1761 #
1762 #
1762 # This exists to help extensions wrap steps if necessary
1763 # This exists to help extensions wrap steps if necessary
1763 getbundle2partsmapping = {}
1764 getbundle2partsmapping = {}
1764
1765
1765 def getbundle2partsgenerator(stepname, idx=None):
1766 def getbundle2partsgenerator(stepname, idx=None):
1766 """decorator for function generating bundle2 part for getbundle
1767 """decorator for function generating bundle2 part for getbundle
1767
1768
1768 The function is added to the step -> function mapping and appended to the
1769 The function is added to the step -> function mapping and appended to the
1769 list of steps. Beware that decorated functions will be added in order
1770 list of steps. Beware that decorated functions will be added in order
1770 (this may matter).
1771 (this may matter).
1771
1772
1772 You can only use this decorator for new steps; if you want to wrap a step
1773 You can only use this decorator for new steps; if you want to wrap a step
1773 from an extension, modify the getbundle2partsmapping dictionary directly."""
1774 from an extension, modify the getbundle2partsmapping dictionary directly."""
1774 def dec(func):
1775 def dec(func):
1775 assert stepname not in getbundle2partsmapping
1776 assert stepname not in getbundle2partsmapping
1776 getbundle2partsmapping[stepname] = func
1777 getbundle2partsmapping[stepname] = func
1777 if idx is None:
1778 if idx is None:
1778 getbundle2partsorder.append(stepname)
1779 getbundle2partsorder.append(stepname)
1779 else:
1780 else:
1780 getbundle2partsorder.insert(idx, stepname)
1781 getbundle2partsorder.insert(idx, stepname)
1781 return func
1782 return func
1782 return dec
1783 return dec
1783
1784
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns a 2-tuple of a dict with metadata about the generated bundle
    and an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    info = {}
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        info['bundleversion'] = 1
        return info, changegroup.makestream(repo, outgoing, '01', source,
                                            bundlecaps=bundlecaps)

    # bundle20 case
    info['bundleversion'] = 2
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    info['prefercompressed'] = bundler.prefercompressed

    return info, bundler.getchunks()

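# Illustrative usage (editor addition): a server-side caller would invoke the
# function above roughly as follows; the argument values are hypothetical.
#
#     info, chunks = getbundlechunks(repo, 'serve', heads=clientheads,
#                                    common=commonnodes, bundlecaps=caps,
#                                    cg=True)
#     for chunk in chunks:
#         pass  # stream each chunk to the client
#
# 'info' carries at least 'bundleversion' and, for bundle2, 'prefercompressed'.
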
@getbundle2partsgenerator('stream2')
def _getbundlestream2(bundler, repo, *args, **kwargs):
    return bundle2.addpartbundlestream2(bundler, repo, **kwargs)

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cgstream = None
    if kwargs.get(r'cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        if outgoing.missing:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps)

    if cgstream:
        part = bundler.newpart('changegroup', data=cgstream)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', '%d' % len(outgoing.missing),
                      mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('bookmarks')
def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
                           b2caps=None, **kwargs):
    """add a bookmark part to the requested bundle"""
    if not kwargs.get(r'bookmarks', False):
        return
    if 'bookmarks' not in b2caps:
        raise ValueError(_('no common bookmarks exchange method'))
    books = bookmod.listbinbookmarks(repo)
    data = bookmod.binaryencode(books)
    if data:
        bundler.newpart('bookmarks', data=data)

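# Editor-added illustrative sketch, not part of this changeset: decode the
# 'bookmarks' part payload built above, assuming the fixed record layout used
# by bookmod.binaryencode at this point in time (a 20-byte binary node
# followed by a big-endian uint16 name length and the bookmark name).
def _sketchdecodebookmarks(data):
    import struct
    entry = struct.Struct('>20sH')
    books = []
    offset = 0
    while offset < len(data):
        node, length = entry.unpack_from(data, offset)
        offset += entry.size
        name = data[offset:offset + length]
        offset += length
        books.append((name, node))
    return books
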
@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get(r'listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get(r'obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('phases')
def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, **kwargs):
    """add phase heads part to the requested bundle"""
    if kwargs.get(r'phases', False):
        if not 'heads' in b2caps.get('phases'):
            raise ValueError(_('no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phases for now.
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = 'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = []
        for phase in phases.allphases:
            phasemapping.append(sorted(headsbyphase[phase]))

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart('phase-heads', data=phasedata)

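# Editor-added illustrative sketch, not part of this changeset: decode a
# 'phase-heads' part payload produced above, assuming the fixed-size records
# written by phases.binaryencode at this point in time (a big-endian int32
# phase number followed by a 20-byte binary node). 'collections' is already
# imported at module level.
def _sketchdecodephaseheads(data):
    import struct
    entry = struct.Struct('>i20s')
    headsbyphase = collections.defaultdict(set)
    for offset in range(0, len(data), entry.size):
        phase, node = entry.unpack_from(data, offset)
        headsbyphase[phase].add(node)
    return headsbyphase
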
@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

@getbundle2partsgenerator('cache:rev-branch-cache')
def _getbundlerevbranchcache(bundler, repo, source, bundlecaps=None,
                             b2caps=None, heads=None, common=None,
                             **kwargs):
    """Transfer the rev-branch-cache mapping

    The payload is a series of data related to each branch:

    1) branch name length
    2) number of open heads
    3) number of closed heads
    4) open heads nodes
    5) closed heads nodes
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True)) or 'rev-branch-cache' not in b2caps:
        return
    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addpartrevbranchcache(repo, bundler, outgoing)

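# Editor-added illustrative sketch, not part of this changeset: walk the
# payload documented in the docstring above, assuming big-endian uint32
# counters and 20-byte binary nodes (the layout used by
# bundle2.addpartrevbranchcache at this point in time).
def _sketchdecoderevbranchcache(data):
    import struct
    header = struct.Struct('>III')
    branches = {}
    offset = 0
    while offset < len(data):
        namelen, nbopen, nbclosed = header.unpack_from(data, offset)
        offset += header.size
        branch = data[offset:offset + namelen]
        offset += namelen
        openheads = [data[offset + 20 * i:offset + 20 * (i + 1)]
                     for i in range(nbopen)]
        offset += 20 * nbopen
        closedheads = [data[offset + 20 * i:offset + 20 * (i + 1)]
                       for i in range(nbclosed)]
        offset += 20 * nbclosed
        branches[branch] = (openheads, closedheads)
    return branches
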
def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

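# Editor-added illustrative sketch, not part of this changeset: this is how a
# client would build the ['hashed', <digest>] value that check_heads() above
# compares against: the sha1 of the remote heads concatenated in sorted order.
def _sketchhashedheads(remoteheads):
    return ['hashed', hashlib.sha1(''.join(sorted(remoteheads))).digest()]
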
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of
    the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                op = bundle2.applybundle(repo, cg, tr, source, url)
                r = bundle2.combinechangegroupresults(op)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput,
                                             source='push')
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles'):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(
        repo, entries, streamclonerequested=pullop.streamclonerequested)

    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback'):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value,
                                                 externalnames=True)
                    attrs['COMPRESSION'] = bundlespec.compression
                    attrs['VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

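# Illustrative example (editor addition): a manifest such as
#
#     https://example.com/full.hg BUNDLESPEC=gzip-v2
#     https://example.com/stream.hg BUNDLESPEC=none-packed1 REQUIRESNI=true
#
# (URLs and attributes made up for illustration) parses into entries like
#
#     {'URL': 'https://example.com/full.hg', 'BUNDLESPEC': 'gzip-v2',
#      'COMPRESSION': 'gzip', 'VERSION': 'v2'}
#
# where COMPRESSION and VERSION are only added when the BUNDLESPEC parses.
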
def isstreamclonespec(bundlespec):
    # Stream clone v1
    if (bundlespec.compression == 'UN' and bundlespec.version == 's1'):
        return True

    # Stream clone v2
    if (bundlespec.compression == 'UN' and bundlespec.version == '02' and \
        bundlespec.contentopts.get('streamv2')):
        return True

    return False

def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and not isstreamclonespec(bundlespec):
                    repo.ui.debug('filtering %s because not a stream clone\n' %
                                  entry['URL'])
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (
                                  entry['URL'], stringutil.forcebytestr(e)))
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug('filtering %s because cannot determine if a stream '
                          'clone bundle\n' % entry['URL'])
            continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

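# Illustrative example (editor addition): with a configuration such as
#
#     [ui]
#     clonebundleprefers = VERSION=v2, COMPRESSION=zstd
#
# entries whose VERSION attribute equals 'v2' sort first, ties are then broken
# by COMPRESSION=zstd, and anything still tied keeps its manifest order (the
# _cmp() above returns 0 in that case).
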
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') %
                    stringutil.forcebytestr(e.reason))

        return False