streamclone: add support for bundle2 based stream clone...
Boris Feld
r35781:7eedbd5d default

@@ -1,2144 +1,2147
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architected as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with value
  are stored in the form `<name>=<value>`. Both name and value are urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any applicative level options MUST go into a bundle2 part instead.

Payload part
------------------------

Binary format is as follows

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters. Each
            couple contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the Part type
contains any uppercase char it is considered mandatory. When no handler is
known for a Mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read during an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import absolute_import, division

import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from . import (
    bookmarks,
    changegroup,
    error,
    node as nodemod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    streamclone,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
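
# Illustrative sketch (not part of the original module): the struct formats
# above are enough to frame bundle2 data by hand, e.g. a payload chunk:
#
#     chunk = 'data'
#     framed = _pack(_fpayloadsize, len(chunk)) + chunk
#     size = _unpack(_fpayloadsize, framed[:4])[0]   # -> 4
#     assert framed[4:4 + size] == chunk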

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)
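
# Illustrative sketch (not part of the original module): for a part carrying
# two parameters, _makefpartparamsizes(2) returns '>BBBB', matching the four
# size bytes emitted by the bundler:
#
#     _unpack(_makefpartparamsizes(2), '\x03\x01\x04\x02')
#     # -> (3, 1, 4, 2): the key/value lengths of the two parameters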

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
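
# Illustrative sketch (not part of the original module): how processing code
# typically records and retrieves results with unbundlerecords.
#
#     records = unbundlerecords()
#     records.add('changegroup', {'return': 1})
#     records.add('output', 'remote: ok\n', inreplyto=3)
#     records['changegroup']              # all 'changegroup' entries, in order
#     records.getreplies(3)['output']     # only the entries replying to part 3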

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction started')
        self.hookargs.update(hookargs)
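
# Illustrative sketch (not part of the original module): hook arguments can
# only be staged on the operation before its transaction is obtained; the
# first gettransaction() call merges them into the transaction and freezes
# them (assuming `repo` and `transactiongetter` are at hand).
#
#     op = bundleoperation(repo, transactiongetter)
#     op.addhookargs({'source': 'push'})   # ok: transaction not started yet
#     tr = op.gettransaction()             # merged into tr.hookargs, frozen
#     op.addhookargs({'url': url})         # raises error.ProgrammingError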

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
        # and should not gracefully clean up.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    Unknown Mandatory parts will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
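
# Illustrative sketch (not part of the original module): encodecaps and
# decodecaps round-trip a capabilities dictionary through the blob format
# described above.
#
#     caps = {'HG20': [], 'changegroup': ['01', '02']}
#     blob = encodecaps(caps)   # -> 'HG20\nchangegroup=01,02'
#     decodecaps(blob) == caps  # -> True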

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding one if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we preserve
        server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged
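
# Illustrative sketch (not part of the original module): assembling an
# outgoing bundle and serializing it with getchunks() (assuming a ui object,
# e.g. repo.ui, is at hand).
#
#     bundler = bundle20(ui)
#     bundler.setcompression('BZ')
#     bundler.newpart('output', data='hello', mandatory=False)
#     raw = ''.join(bundler.getchunks())   # 'HG20', params, compressed parts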


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
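
# Illustrative sketch (not part of the original module): reading a bundle2
# file and walking its parts (assuming a ui object and a bundle path).
#
#     with open(bundlepath, 'rb') as fp:
#         unbundler = getunbundler(ui, fp)   # checks the 'HG20' magic
#         for part in unbundler.iterparts():
#             ui.write('%s\n' % part.type)   # parts are consumed in order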

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process the binary blob of stream level parameters"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params
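
    # Illustrative sketch (not part of the original module): given the blob
    # 'Compression=BZ obsmarkers', _processallparams returns the sortdict
    # {'Compression': 'BZ', 'obsmarkers': None}, after dispatching each name
    # to its registered stream parameter handler (here installing the BZ
    # decompression engine).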


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise a BundleUnknownFeatureError
        when they are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, "ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle to be
        interpreted to know its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')
870
870
871 def _readpartheader(self):
871 def _readpartheader(self):
872 """reads a part header size and return the bytes blob
872 """reads a part header size and return the bytes blob
873
873
874 returns None if empty"""
874 returns None if empty"""
875 headersize = self._unpack(_fpartheadersize)[0]
875 headersize = self._unpack(_fpartheadersize)[0]
876 if headersize < 0:
876 if headersize < 0:
877 raise error.BundleValueError('negative part header size: %i'
877 raise error.BundleValueError('negative part header size: %i'
878 % headersize)
878 % headersize)
879 indebug(self.ui, 'part header size: %i' % headersize)
879 indebug(self.ui, 'part header size: %i' % headersize)
880 if headersize:
880 if headersize:
881 return self._readexact(headersize)
881 return self._readexact(headersize)
882 return None
882 return None
883
883
884 def compressed(self):
884 def compressed(self):
885 self.params # load params
885 self.params # load params
886 return self._compressed
886 return self._compressed
887
887
888 def close(self):
888 def close(self):
889 """close underlying file"""
889 """close underlying file"""
890 if util.safehasattr(self._fp, 'close'):
890 if util.safehasattr(self._fp, 'close'):
891 return self._fp.close()
891 return self._fp.close()
892
892
893 formatmap = {'20': unbundle20}
893 formatmap = {'20': unbundle20}
894
894
895 b2streamparamsmap = {}
895 b2streamparamsmap = {}
896
896
897 def b2streamparamhandler(name):
897 def b2streamparamhandler(name):
898 """register a handler for a stream level parameter"""
898 """register a handler for a stream level parameter"""
899 def decorator(func):
899 def decorator(func):
900 assert name not in b2streamparamsmap
900 assert name not in b2streamparamsmap
901 b2streamparamsmap[name] = func
901 b2streamparamsmap[name] = func
902 return func
902 return func
903 return decorator
903 return decorator
904
904
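A hypothetical registration example for the decorator above; the 'mirror' parameter name and handler body are invented for illustration:

@b2streamparamhandler('mirror')
def processmirror(unbundler, param, value):
    # a lower-case first letter makes this parameter advisory, so peers
    # without a handler could safely skip it
    unbundler.ui.debug('bundle mirrored from %s\n' % value)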
905 @b2streamparamhandler('compression')
905 @b2streamparamhandler('compression')
906 def processcompression(unbundler, param, value):
906 def processcompression(unbundler, param, value):
907 """read compression parameter and install payload decompression"""
907 """read compression parameter and install payload decompression"""
908 if value not in util.compengines.supportedbundletypes:
908 if value not in util.compengines.supportedbundletypes:
909 raise error.BundleUnknownFeatureError(params=(param,),
909 raise error.BundleUnknownFeatureError(params=(param,),
910 values=(value,))
910 values=(value,))
911 unbundler._compengine = util.compengines.forbundletype(value)
911 unbundler._compengine = util.compengines.forbundletype(value)
912 if value is not None:
912 if value is not None:
913 unbundler._compressed = True
913 unbundler._compressed = True
914
914
915 class bundlepart(object):
915 class bundlepart(object):
916 """A bundle2 part contains application level payload
916 """A bundle2 part contains application level payload
917
917
918 The part `type` is used to route the part to the application level
918 The part `type` is used to route the part to the application level
919 handler.
919 handler.
920
920
921 The part payload is contained in ``part.data``. It could be raw bytes or a
921 The part payload is contained in ``part.data``. It could be raw bytes or a
922 generator of byte chunks.
922 generator of byte chunks.
923
923
924 You can add parameters to the part using the ``addparam`` method.
924 You can add parameters to the part using the ``addparam`` method.
925 Parameters can be either mandatory (default) or advisory. Remote side
925 Parameters can be either mandatory (default) or advisory. Remote side
926 should be able to safely ignore the advisory ones.
926 should be able to safely ignore the advisory ones.
927
927
928 Neither data nor parameters can be modified after the generation has begun.
928 Neither data nor parameters can be modified after the generation has begun.
929 """
929 """
930
930
931 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
931 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
932 data='', mandatory=True):
932 data='', mandatory=True):
933 validateparttype(parttype)
933 validateparttype(parttype)
934 self.id = None
934 self.id = None
935 self.type = parttype
935 self.type = parttype
936 self._data = data
936 self._data = data
937 self._mandatoryparams = list(mandatoryparams)
937 self._mandatoryparams = list(mandatoryparams)
938 self._advisoryparams = list(advisoryparams)
938 self._advisoryparams = list(advisoryparams)
939 # checking for duplicated entries
939 # checking for duplicated entries
940 self._seenparams = set()
940 self._seenparams = set()
941 for pname, __ in self._mandatoryparams + self._advisoryparams:
941 for pname, __ in self._mandatoryparams + self._advisoryparams:
942 if pname in self._seenparams:
942 if pname in self._seenparams:
943 raise error.ProgrammingError('duplicated params: %s' % pname)
943 raise error.ProgrammingError('duplicated params: %s' % pname)
944 self._seenparams.add(pname)
944 self._seenparams.add(pname)
945 # status of the part's generation:
945 # status of the part's generation:
946 # - None: not started,
946 # - None: not started,
947 # - False: currently being generated,
947 # - False: currently being generated,
948 # - True: generation done.
948 # - True: generation done.
949 self._generated = None
949 self._generated = None
950 self.mandatory = mandatory
950 self.mandatory = mandatory
951
951
952 def __repr__(self):
952 def __repr__(self):
953 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
953 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
954 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
954 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
955 % (cls, id(self), self.id, self.type, self.mandatory))
955 % (cls, id(self), self.id, self.type, self.mandatory))
956
956
957 def copy(self):
957 def copy(self):
958 """return a copy of the part
958 """return a copy of the part
959
959
960 The new part has the very same content but no partid assigned yet.
960 The new part has the very same content but no partid assigned yet.
961 Parts with generated data cannot be copied."""
961 Parts with generated data cannot be copied."""
962 assert not util.safehasattr(self.data, 'next')
962 assert not util.safehasattr(self.data, 'next')
963 return self.__class__(self.type, self._mandatoryparams,
963 return self.__class__(self.type, self._mandatoryparams,
964 self._advisoryparams, self._data, self.mandatory)
964 self._advisoryparams, self._data, self.mandatory)
965
965
966 # methods used to define the part content
966 # methods used to define the part content
967 @property
967 @property
968 def data(self):
968 def data(self):
969 return self._data
969 return self._data
970
970
971 @data.setter
971 @data.setter
972 def data(self, data):
972 def data(self, data):
973 if self._generated is not None:
973 if self._generated is not None:
974 raise error.ReadOnlyPartError('part is being generated')
974 raise error.ReadOnlyPartError('part is being generated')
975 self._data = data
975 self._data = data
976
976
977 @property
977 @property
978 def mandatoryparams(self):
978 def mandatoryparams(self):
979 # make it an immutable tuple to force people through ``addparam``
979 # make it an immutable tuple to force people through ``addparam``
980 return tuple(self._mandatoryparams)
980 return tuple(self._mandatoryparams)
981
981
982 @property
982 @property
983 def advisoryparams(self):
983 def advisoryparams(self):
984 # make it an immutable tuple to force people through ``addparam``
984 # make it an immutable tuple to force people through ``addparam``
985 return tuple(self._advisoryparams)
985 return tuple(self._advisoryparams)
986
986
987 def addparam(self, name, value='', mandatory=True):
987 def addparam(self, name, value='', mandatory=True):
988 """add a parameter to the part
988 """add a parameter to the part
989
989
990 If 'mandatory' is set to True, the remote handler must claim support
990 If 'mandatory' is set to True, the remote handler must claim support
991 for this parameter or the unbundling will be aborted.
991 for this parameter or the unbundling will be aborted.
992
992
993 The 'name' and 'value' cannot exceed 255 bytes each.
993 The 'name' and 'value' cannot exceed 255 bytes each.
994 """
994 """
995 if self._generated is not None:
995 if self._generated is not None:
996 raise error.ReadOnlyPartError('part is being generated')
996 raise error.ReadOnlyPartError('part is being generated')
997 if name in self._seenparams:
997 if name in self._seenparams:
998 raise ValueError('duplicated params: %s' % name)
998 raise ValueError('duplicated params: %s' % name)
999 self._seenparams.add(name)
999 self._seenparams.add(name)
1000 params = self._advisoryparams
1000 params = self._advisoryparams
1001 if mandatory:
1001 if mandatory:
1002 params = self._mandatoryparams
1002 params = self._mandatoryparams
1003 params.append((name, value))
1003 params.append((name, value))
1004
1004
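A minimal usage sketch of the parameter API above; the part type and parameter names are invented for illustration:

part = bundlepart('output', data='hello')
part.addparam('verbosity', 'debug', mandatory=False)  # advisory
part.addparam('encoding', 'utf-8')                    # mandatory by default
# adding 'encoding' again would raise ValueError('duplicated params: encoding')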
1005 # methods used to generate the bundle2 stream
1005 # methods used to generate the bundle2 stream
1006 def getchunks(self, ui):
1006 def getchunks(self, ui):
1007 if self._generated is not None:
1007 if self._generated is not None:
1008 raise error.ProgrammingError('part can only be consumed once')
1008 raise error.ProgrammingError('part can only be consumed once')
1009 self._generated = False
1009 self._generated = False
1010
1010
1011 if ui.debugflag:
1011 if ui.debugflag:
1012 msg = ['bundle2-output-part: "%s"' % self.type]
1012 msg = ['bundle2-output-part: "%s"' % self.type]
1013 if not self.mandatory:
1013 if not self.mandatory:
1014 msg.append(' (advisory)')
1014 msg.append(' (advisory)')
1015 nbmp = len(self.mandatoryparams)
1015 nbmp = len(self.mandatoryparams)
1016 nbap = len(self.advisoryparams)
1016 nbap = len(self.advisoryparams)
1017 if nbmp or nbap:
1017 if nbmp or nbap:
1018 msg.append(' (params:')
1018 msg.append(' (params:')
1019 if nbmp:
1019 if nbmp:
1020 msg.append(' %i mandatory' % nbmp)
1020 msg.append(' %i mandatory' % nbmp)
1021 if nbap:
1021 if nbap:
1022 msg.append(' %i advisory' % nbap)
1022 msg.append(' %i advisory' % nbap)
1023 msg.append(')')
1023 msg.append(')')
1024 if not self.data:
1024 if not self.data:
1025 msg.append(' empty payload')
1025 msg.append(' empty payload')
1026 elif (util.safehasattr(self.data, 'next')
1026 elif (util.safehasattr(self.data, 'next')
1027 or util.safehasattr(self.data, '__next__')):
1027 or util.safehasattr(self.data, '__next__')):
1028 msg.append(' streamed payload')
1028 msg.append(' streamed payload')
1029 else:
1029 else:
1030 msg.append(' %i bytes payload' % len(self.data))
1030 msg.append(' %i bytes payload' % len(self.data))
1031 msg.append('\n')
1031 msg.append('\n')
1032 ui.debug(''.join(msg))
1032 ui.debug(''.join(msg))
1033
1033
1034 #### header
1034 #### header
1035 if self.mandatory:
1035 if self.mandatory:
1036 parttype = self.type.upper()
1036 parttype = self.type.upper()
1037 else:
1037 else:
1038 parttype = self.type.lower()
1038 parttype = self.type.lower()
1039 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1039 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1040 ## parttype
1040 ## parttype
1041 header = [_pack(_fparttypesize, len(parttype)),
1041 header = [_pack(_fparttypesize, len(parttype)),
1042 parttype, _pack(_fpartid, self.id),
1042 parttype, _pack(_fpartid, self.id),
1043 ]
1043 ]
1044 ## parameters
1044 ## parameters
1045 # count
1045 # count
1046 manpar = self.mandatoryparams
1046 manpar = self.mandatoryparams
1047 advpar = self.advisoryparams
1047 advpar = self.advisoryparams
1048 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1048 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1049 # size
1049 # size
1050 parsizes = []
1050 parsizes = []
1051 for key, value in manpar:
1051 for key, value in manpar:
1052 parsizes.append(len(key))
1052 parsizes.append(len(key))
1053 parsizes.append(len(value))
1053 parsizes.append(len(value))
1054 for key, value in advpar:
1054 for key, value in advpar:
1055 parsizes.append(len(key))
1055 parsizes.append(len(key))
1056 parsizes.append(len(value))
1056 parsizes.append(len(value))
1057 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1057 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1058 header.append(paramsizes)
1058 header.append(paramsizes)
1059 # key, value
1059 # key, value
1060 for key, value in manpar:
1060 for key, value in manpar:
1061 header.append(key)
1061 header.append(key)
1062 header.append(value)
1062 header.append(value)
1063 for key, value in advpar:
1063 for key, value in advpar:
1064 header.append(key)
1064 header.append(key)
1065 header.append(value)
1065 header.append(value)
1066 ## finalize header
1066 ## finalize header
1067 try:
1067 try:
1068 headerchunk = ''.join(header)
1068 headerchunk = ''.join(header)
1069 except TypeError:
1069 except TypeError:
1070 raise TypeError(r'Found a non-bytes trying to '
1070 raise TypeError(r'Found a non-bytes trying to '
1071 r'build bundle part header: %r' % header)
1071 r'build bundle part header: %r' % header)
1072 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1072 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1073 yield _pack(_fpartheadersize, len(headerchunk))
1073 yield _pack(_fpartheadersize, len(headerchunk))
1074 yield headerchunk
1074 yield headerchunk
1075 ## payload
1075 ## payload
1076 try:
1076 try:
1077 for chunk in self._payloadchunks():
1077 for chunk in self._payloadchunks():
1078 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1078 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1079 yield _pack(_fpayloadsize, len(chunk))
1079 yield _pack(_fpayloadsize, len(chunk))
1080 yield chunk
1080 yield chunk
1081 except GeneratorExit:
1081 except GeneratorExit:
1082 # GeneratorExit means that nobody is listening for our
1082 # GeneratorExit means that nobody is listening for our
1083 # results anyway, so just bail quickly rather than trying
1083 # results anyway, so just bail quickly rather than trying
1084 # to produce an error part.
1084 # to produce an error part.
1085 ui.debug('bundle2-generatorexit\n')
1085 ui.debug('bundle2-generatorexit\n')
1086 raise
1086 raise
1087 except BaseException as exc:
1087 except BaseException as exc:
1088 bexc = util.forcebytestr(exc)
1088 bexc = util.forcebytestr(exc)
1089 # backup exception data for later
1089 # backup exception data for later
1090 ui.debug('bundle2-input-stream-interrupt: encoding exception %s\n'
1090 ui.debug('bundle2-input-stream-interrupt: encoding exception %s\n'
1091 % bexc)
1091 % bexc)
1092 tb = sys.exc_info()[2]
1092 tb = sys.exc_info()[2]
1093 msg = 'unexpected error: %s' % bexc
1093 msg = 'unexpected error: %s' % bexc
1094 interpart = bundlepart('error:abort', [('message', msg)],
1094 interpart = bundlepart('error:abort', [('message', msg)],
1095 mandatory=False)
1095 mandatory=False)
1096 interpart.id = 0
1096 interpart.id = 0
1097 yield _pack(_fpayloadsize, -1)
1097 yield _pack(_fpayloadsize, -1)
1098 for chunk in interpart.getchunks(ui=ui):
1098 for chunk in interpart.getchunks(ui=ui):
1099 yield chunk
1099 yield chunk
1100 outdebug(ui, 'closing payload chunk')
1100 outdebug(ui, 'closing payload chunk')
1101 # abort current part payload
1101 # abort current part payload
1102 yield _pack(_fpayloadsize, 0)
1102 yield _pack(_fpayloadsize, 0)
1103 pycompat.raisewithtb(exc, tb)
1103 pycompat.raisewithtb(exc, tb)
1104 # end of payload
1104 # end of payload
1105 outdebug(ui, 'closing payload chunk')
1105 outdebug(ui, 'closing payload chunk')
1106 yield _pack(_fpayloadsize, 0)
1106 yield _pack(_fpayloadsize, 0)
1107 self._generated = True
1107 self._generated = True
1108
1108
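A condensed sketch of the header layout emitted by getchunks() above, assuming the struct formats defined near the top of this module ('>B' for the part type size, '>I' for the part id, '>BB' for the parameter counts, and one 'BB' pair per parameter for the key/value sizes):

import struct

def packpartheader(parttype, partid, manpar):
    # mandatory-only variant for brevity; advisory params are appended
    # the same way after the mandatory ones
    header = [struct.pack('>B', len(parttype)), parttype,
              struct.pack('>I', partid),
              struct.pack('>BB', len(manpar), 0)]
    sizes = []
    for key, value in manpar:
        sizes.extend([len(key), len(value)])
    header.append(struct.pack('>' + 'BB' * len(manpar), *sizes))
    for key, value in manpar:
        header.extend([key, value])
    return ''.join(header)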
1109 def _payloadchunks(self):
1109 def _payloadchunks(self):
1110 """yield chunks of a the part payload
1110 """yield chunks of a the part payload
1111
1111
1112 Exists to handle the different methods to provide data to a part."""
1112 Exists to handle the different methods to provide data to a part."""
1113 # we only support fixed size data now.
1113 # we only support fixed size data now.
1114 # This will be improved in the future.
1114 # This will be improved in the future.
1115 if (util.safehasattr(self.data, 'next')
1115 if (util.safehasattr(self.data, 'next')
1116 or util.safehasattr(self.data, '__next__')):
1116 or util.safehasattr(self.data, '__next__')):
1117 buff = util.chunkbuffer(self.data)
1117 buff = util.chunkbuffer(self.data)
1118 chunk = buff.read(preferedchunksize)
1118 chunk = buff.read(preferedchunksize)
1119 while chunk:
1119 while chunk:
1120 yield chunk
1120 yield chunk
1121 chunk = buff.read(preferedchunksize)
1121 chunk = buff.read(preferedchunksize)
1122 elif len(self.data):
1122 elif len(self.data):
1123 yield self.data
1123 yield self.data
1124
1124
1125
1125
1126 flaginterrupt = -1
1126 flaginterrupt = -1
1127
1127
1128 class interrupthandler(unpackermixin):
1128 class interrupthandler(unpackermixin):
1129 """read one part and process it with restricted capability
1129 """read one part and process it with restricted capability
1130
1130
1131 This allows transmitting exceptions raised on the producer side during part
1131 This allows transmitting exceptions raised on the producer side during part
1132 iteration while the consumer is reading a part.
1132 iteration while the consumer is reading a part.
1133
1133
1134 Parts processed in this manner only have access to a ui object."""
1134 Parts processed in this manner only have access to a ui object."""
1135
1135
1136 def __init__(self, ui, fp):
1136 def __init__(self, ui, fp):
1137 super(interrupthandler, self).__init__(fp)
1137 super(interrupthandler, self).__init__(fp)
1138 self.ui = ui
1138 self.ui = ui
1139
1139
1140 def _readpartheader(self):
1140 def _readpartheader(self):
1141 """reads a part header size and return the bytes blob
1141 """reads a part header size and return the bytes blob
1142
1142
1143 returns None if empty"""
1143 returns None if empty"""
1144 headersize = self._unpack(_fpartheadersize)[0]
1144 headersize = self._unpack(_fpartheadersize)[0]
1145 if headersize < 0:
1145 if headersize < 0:
1146 raise error.BundleValueError('negative part header size: %i'
1146 raise error.BundleValueError('negative part header size: %i'
1147 % headersize)
1147 % headersize)
1148 indebug(self.ui, 'part header size: %i' % headersize)
1148 indebug(self.ui, 'part header size: %i' % headersize)
1149 if headersize:
1149 if headersize:
1150 return self._readexact(headersize)
1150 return self._readexact(headersize)
1151 return None
1151 return None
1152
1152
1153 def __call__(self):
1153 def __call__(self):
1154
1154
1155 self.ui.debug('bundle2-input-stream-interrupt:'
1155 self.ui.debug('bundle2-input-stream-interrupt:'
1156 ' opening out of band context\n')
1156 ' opening out of band context\n')
1157 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1157 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1158 headerblock = self._readpartheader()
1158 headerblock = self._readpartheader()
1159 if headerblock is None:
1159 if headerblock is None:
1160 indebug(self.ui, 'no part found during interruption.')
1160 indebug(self.ui, 'no part found during interruption.')
1161 return
1161 return
1162 part = unbundlepart(self.ui, headerblock, self._fp)
1162 part = unbundlepart(self.ui, headerblock, self._fp)
1163 op = interruptoperation(self.ui)
1163 op = interruptoperation(self.ui)
1164 hardabort = False
1164 hardabort = False
1165 try:
1165 try:
1166 _processpart(op, part)
1166 _processpart(op, part)
1167 except (SystemExit, KeyboardInterrupt):
1167 except (SystemExit, KeyboardInterrupt):
1168 hardabort = True
1168 hardabort = True
1169 raise
1169 raise
1170 finally:
1170 finally:
1171 if not hardabort:
1171 if not hardabort:
1172 part.consume()
1172 part.consume()
1173 self.ui.debug('bundle2-input-stream-interrupt:'
1173 self.ui.debug('bundle2-input-stream-interrupt:'
1174 ' closing out of band context\n')
1174 ' closing out of band context\n')
1175
1175
1176 class interruptoperation(object):
1176 class interruptoperation(object):
1177 """A limited operation to be use by part handler during interruption
1177 """A limited operation to be use by part handler during interruption
1178
1178
1179 It only have access to an ui object.
1179 It only have access to an ui object.
1180 """
1180 """
1181
1181
1182 def __init__(self, ui):
1182 def __init__(self, ui):
1183 self.ui = ui
1183 self.ui = ui
1184 self.reply = None
1184 self.reply = None
1185 self.captureoutput = False
1185 self.captureoutput = False
1186
1186
1187 @property
1187 @property
1188 def repo(self):
1188 def repo(self):
1189 raise error.ProgrammingError('no repo access from stream interruption')
1189 raise error.ProgrammingError('no repo access from stream interruption')
1190
1190
1191 def gettransaction(self):
1191 def gettransaction(self):
1192 raise TransactionUnavailable('no repo access from stream interruption')
1192 raise TransactionUnavailable('no repo access from stream interruption')
1193
1193
1194 def decodepayloadchunks(ui, fh):
1194 def decodepayloadchunks(ui, fh):
1195 """Reads bundle2 part payload data into chunks.
1195 """Reads bundle2 part payload data into chunks.
1196
1196
1197 Part payload data consists of framed chunks. This function takes
1197 Part payload data consists of framed chunks. This function takes
1198 a file handle and emits those chunks.
1198 a file handle and emits those chunks.
1199 """
1199 """
1200 dolog = ui.configbool('devel', 'bundle2.debug')
1200 dolog = ui.configbool('devel', 'bundle2.debug')
1201 debug = ui.debug
1201 debug = ui.debug
1202
1202
1203 headerstruct = struct.Struct(_fpayloadsize)
1203 headerstruct = struct.Struct(_fpayloadsize)
1204 headersize = headerstruct.size
1204 headersize = headerstruct.size
1205 unpack = headerstruct.unpack
1205 unpack = headerstruct.unpack
1206
1206
1207 readexactly = changegroup.readexactly
1207 readexactly = changegroup.readexactly
1208 read = fh.read
1208 read = fh.read
1209
1209
1210 chunksize = unpack(readexactly(fh, headersize))[0]
1210 chunksize = unpack(readexactly(fh, headersize))[0]
1211 indebug(ui, 'payload chunk size: %i' % chunksize)
1211 indebug(ui, 'payload chunk size: %i' % chunksize)
1212
1212
1213 # changegroup.readexactly() is inlined below for performance.
1213 # changegroup.readexactly() is inlined below for performance.
1214 while chunksize:
1214 while chunksize:
1215 if chunksize >= 0:
1215 if chunksize >= 0:
1216 s = read(chunksize)
1216 s = read(chunksize)
1217 if len(s) < chunksize:
1217 if len(s) < chunksize:
1218 raise error.Abort(_('stream ended unexpectedly '
1218 raise error.Abort(_('stream ended unexpectedly '
1219 '(got %d bytes, expected %d)') %
1219 '(got %d bytes, expected %d)') %
1220 (len(s), chunksize))
1220 (len(s), chunksize))
1221
1221
1222 yield s
1222 yield s
1223 elif chunksize == flaginterrupt:
1223 elif chunksize == flaginterrupt:
1224 # Interrupt "signal" detected. The regular stream is interrupted
1224 # Interrupt "signal" detected. The regular stream is interrupted
1225 # and a bundle2 part follows. Consume it.
1225 # and a bundle2 part follows. Consume it.
1226 interrupthandler(ui, fh)()
1226 interrupthandler(ui, fh)()
1227 else:
1227 else:
1228 raise error.BundleValueError(
1228 raise error.BundleValueError(
1229 'negative payload chunk size: %s' % chunksize)
1229 'negative payload chunk size: %s' % chunksize)
1230
1230
1231 s = read(headersize)
1231 s = read(headersize)
1232 if len(s) < headersize:
1232 if len(s) < headersize:
1233 raise error.Abort(_('stream ended unexpectedly '
1233 raise error.Abort(_('stream ended unexpectedly '
1234 '(got %d bytes, expected %d)') %
1234 '(got %d bytes, expected %d)') %
1235 (len(s), headersize))
1235 (len(s), headersize))
1236
1236
1237 chunksize = unpack(s)[0]
1237 chunksize = unpack(s)[0]
1238
1238
1239 # indebug() inlined for performance.
1239 # indebug() inlined for performance.
1240 if dolog:
1240 if dolog:
1241 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1241 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1242
1242
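A worked example of the framing decoded above: a payload of 'data' arrives as one 4-byte chunk followed by the zero-size terminator, while a -1 size (flaginterrupt) in chunk-size position hands control to interrupthandler instead:

import struct
frames = struct.pack('>i', 4) + 'data' + struct.pack('>i', 0)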
1243 class unbundlepart(unpackermixin):
1243 class unbundlepart(unpackermixin):
1244 """a bundle part read from a bundle"""
1244 """a bundle part read from a bundle"""
1245
1245
1246 def __init__(self, ui, header, fp):
1246 def __init__(self, ui, header, fp):
1247 super(unbundlepart, self).__init__(fp)
1247 super(unbundlepart, self).__init__(fp)
1248 self._seekable = (util.safehasattr(fp, 'seek') and
1248 self._seekable = (util.safehasattr(fp, 'seek') and
1249 util.safehasattr(fp, 'tell'))
1249 util.safehasattr(fp, 'tell'))
1250 self.ui = ui
1250 self.ui = ui
1251 # unbundle state attr
1251 # unbundle state attr
1252 self._headerdata = header
1252 self._headerdata = header
1253 self._headeroffset = 0
1253 self._headeroffset = 0
1254 self._initialized = False
1254 self._initialized = False
1255 self.consumed = False
1255 self.consumed = False
1256 # part data
1256 # part data
1257 self.id = None
1257 self.id = None
1258 self.type = None
1258 self.type = None
1259 self.mandatoryparams = None
1259 self.mandatoryparams = None
1260 self.advisoryparams = None
1260 self.advisoryparams = None
1261 self.params = None
1261 self.params = None
1262 self.mandatorykeys = ()
1262 self.mandatorykeys = ()
1263 self._readheader()
1263 self._readheader()
1264 self._mandatory = None
1264 self._mandatory = None
1265 self._pos = 0
1265 self._pos = 0
1266
1266
1267 def _fromheader(self, size):
1267 def _fromheader(self, size):
1268 """return the next <size> byte from the header"""
1268 """return the next <size> byte from the header"""
1269 offset = self._headeroffset
1269 offset = self._headeroffset
1270 data = self._headerdata[offset:(offset + size)]
1270 data = self._headerdata[offset:(offset + size)]
1271 self._headeroffset = offset + size
1271 self._headeroffset = offset + size
1272 return data
1272 return data
1273
1273
1274 def _unpackheader(self, format):
1274 def _unpackheader(self, format):
1275 """read given format from header
1275 """read given format from header
1276
1276
1277 This automatically computes the size of the format to read.
1277 This automatically computes the size of the format to read.
1278 data = self._fromheader(struct.calcsize(format))
1278 data = self._fromheader(struct.calcsize(format))
1279 return _unpack(format, data)
1279 return _unpack(format, data)
1280
1280
1281 def _initparams(self, mandatoryparams, advisoryparams):
1281 def _initparams(self, mandatoryparams, advisoryparams):
1282 """internal function to setup all logic related parameters"""
1282 """internal function to setup all logic related parameters"""
1283 # make it read only to prevent people touching it by mistake.
1283 # make it read only to prevent people touching it by mistake.
1284 self.mandatoryparams = tuple(mandatoryparams)
1284 self.mandatoryparams = tuple(mandatoryparams)
1285 self.advisoryparams = tuple(advisoryparams)
1285 self.advisoryparams = tuple(advisoryparams)
1286 # user friendly UI
1286 # user friendly UI
1287 self.params = util.sortdict(self.mandatoryparams)
1287 self.params = util.sortdict(self.mandatoryparams)
1288 self.params.update(self.advisoryparams)
1288 self.params.update(self.advisoryparams)
1289 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1289 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1290
1290
1291 def _readheader(self):
1291 def _readheader(self):
1292 """read the header and setup the object"""
1292 """read the header and setup the object"""
1293 typesize = self._unpackheader(_fparttypesize)[0]
1293 typesize = self._unpackheader(_fparttypesize)[0]
1294 self.type = self._fromheader(typesize)
1294 self.type = self._fromheader(typesize)
1295 indebug(self.ui, 'part type: "%s"' % self.type)
1295 indebug(self.ui, 'part type: "%s"' % self.type)
1296 self.id = self._unpackheader(_fpartid)[0]
1296 self.id = self._unpackheader(_fpartid)[0]
1297 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1297 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1298 # extract mandatory bit from type
1298 # extract mandatory bit from type
1299 self.mandatory = (self.type != self.type.lower())
1299 self.mandatory = (self.type != self.type.lower())
1300 self.type = self.type.lower()
1300 self.type = self.type.lower()
1301 ## reading parameters
1301 ## reading parameters
1302 # param count
1302 # param count
1303 mancount, advcount = self._unpackheader(_fpartparamcount)
1303 mancount, advcount = self._unpackheader(_fpartparamcount)
1304 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1304 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1305 # param size
1305 # param size
1306 fparamsizes = _makefpartparamsizes(mancount + advcount)
1306 fparamsizes = _makefpartparamsizes(mancount + advcount)
1307 paramsizes = self._unpackheader(fparamsizes)
1307 paramsizes = self._unpackheader(fparamsizes)
1308 # make it a list of pairs again
1308 # make it a list of pairs again
1309 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1309 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1310 # split mandatory from advisory
1310 # split mandatory from advisory
1311 mansizes = paramsizes[:mancount]
1311 mansizes = paramsizes[:mancount]
1312 advsizes = paramsizes[mancount:]
1312 advsizes = paramsizes[mancount:]
1313 # retrieve param value
1313 # retrieve param value
1314 manparams = []
1314 manparams = []
1315 for key, value in mansizes:
1315 for key, value in mansizes:
1316 manparams.append((self._fromheader(key), self._fromheader(value)))
1316 manparams.append((self._fromheader(key), self._fromheader(value)))
1317 advparams = []
1317 advparams = []
1318 for key, value in advsizes:
1318 for key, value in advsizes:
1319 advparams.append((self._fromheader(key), self._fromheader(value)))
1319 advparams.append((self._fromheader(key), self._fromheader(value)))
1320 self._initparams(manparams, advparams)
1320 self._initparams(manparams, advparams)
1321 ## part payload
1321 ## part payload
1322 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1322 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1323 # header fully read; mark the part as initialized
1323 # header fully read; mark the part as initialized
1324 self._initialized = True
1324 self._initialized = True
1325
1325
1326 def _payloadchunks(self):
1326 def _payloadchunks(self):
1327 """Generator of decoded chunks in the payload."""
1327 """Generator of decoded chunks in the payload."""
1328 return decodepayloadchunks(self.ui, self._fp)
1328 return decodepayloadchunks(self.ui, self._fp)
1329
1329
1330 def consume(self):
1330 def consume(self):
1331 """Read the part payload until completion.
1331 """Read the part payload until completion.
1332
1332
1333 By consuming the part data, the underlying stream read offset will
1333 By consuming the part data, the underlying stream read offset will
1334 be advanced to the next part (or end of stream).
1334 be advanced to the next part (or end of stream).
1335 """
1335 """
1336 if self.consumed:
1336 if self.consumed:
1337 return
1337 return
1338
1338
1339 chunk = self.read(32768)
1339 chunk = self.read(32768)
1340 while chunk:
1340 while chunk:
1341 self._pos += len(chunk)
1341 self._pos += len(chunk)
1342 chunk = self.read(32768)
1342 chunk = self.read(32768)
1343
1343
1344 def read(self, size=None):
1344 def read(self, size=None):
1345 """read payload data"""
1345 """read payload data"""
1346 if not self._initialized:
1346 if not self._initialized:
1347 self._readheader()
1347 self._readheader()
1348 if size is None:
1348 if size is None:
1349 data = self._payloadstream.read()
1349 data = self._payloadstream.read()
1350 else:
1350 else:
1351 data = self._payloadstream.read(size)
1351 data = self._payloadstream.read(size)
1352 self._pos += len(data)
1352 self._pos += len(data)
1353 if size is None or len(data) < size:
1353 if size is None or len(data) < size:
1354 if not self.consumed and self._pos:
1354 if not self.consumed and self._pos:
1355 self.ui.debug('bundle2-input-part: total payload size %i\n'
1355 self.ui.debug('bundle2-input-part: total payload size %i\n'
1356 % self._pos)
1356 % self._pos)
1357 self.consumed = True
1357 self.consumed = True
1358 return data
1358 return data
1359
1359
1360 class seekableunbundlepart(unbundlepart):
1360 class seekableunbundlepart(unbundlepart):
1361 """A bundle2 part in a bundle that is seekable.
1361 """A bundle2 part in a bundle that is seekable.
1362
1362
1363 Regular ``unbundlepart`` instances can only be read once. This class
1363 Regular ``unbundlepart`` instances can only be read once. This class
1364 extends ``unbundlepart`` to enable bi-directional seeking within the
1364 extends ``unbundlepart`` to enable bi-directional seeking within the
1365 part.
1365 part.
1366
1366
1367 Bundle2 part data consists of framed chunks. Offsets when seeking
1367 Bundle2 part data consists of framed chunks. Offsets when seeking
1368 refer to the decoded data, not the offsets in the underlying bundle2
1368 refer to the decoded data, not the offsets in the underlying bundle2
1369 stream.
1369 stream.
1370
1370
1371 To facilitate quickly seeking within the decoded data, instances of this
1371 To facilitate quickly seeking within the decoded data, instances of this
1372 class maintain a mapping between offsets in the underlying stream and
1372 class maintain a mapping between offsets in the underlying stream and
1373 the decoded payload. This mapping will consume memory in proportion
1373 the decoded payload. This mapping will consume memory in proportion
1374 to the number of chunks within the payload (which almost certainly
1374 to the number of chunks within the payload (which almost certainly
1375 increases in proportion with the size of the part).
1375 increases in proportion with the size of the part).
1376 """
1376 """
1377 def __init__(self, ui, header, fp):
1377 def __init__(self, ui, header, fp):
1378 # (payload, file) offsets for chunk starts.
1378 # (payload, file) offsets for chunk starts.
1379 self._chunkindex = []
1379 self._chunkindex = []
1380
1380
1381 super(seekableunbundlepart, self).__init__(ui, header, fp)
1381 super(seekableunbundlepart, self).__init__(ui, header, fp)
1382
1382
1383 def _payloadchunks(self, chunknum=0):
1383 def _payloadchunks(self, chunknum=0):
1384 '''seek to specified chunk and start yielding data'''
1384 '''seek to specified chunk and start yielding data'''
1385 if len(self._chunkindex) == 0:
1385 if len(self._chunkindex) == 0:
1386 assert chunknum == 0, 'Must start with chunk 0'
1386 assert chunknum == 0, 'Must start with chunk 0'
1387 self._chunkindex.append((0, self._tellfp()))
1387 self._chunkindex.append((0, self._tellfp()))
1388 else:
1388 else:
1389 assert chunknum < len(self._chunkindex), \
1389 assert chunknum < len(self._chunkindex), \
1390 'Unknown chunk %d' % chunknum
1390 'Unknown chunk %d' % chunknum
1391 self._seekfp(self._chunkindex[chunknum][1])
1391 self._seekfp(self._chunkindex[chunknum][1])
1392
1392
1393 pos = self._chunkindex[chunknum][0]
1393 pos = self._chunkindex[chunknum][0]
1394
1394
1395 for chunk in decodepayloadchunks(self.ui, self._fp):
1395 for chunk in decodepayloadchunks(self.ui, self._fp):
1396 chunknum += 1
1396 chunknum += 1
1397 pos += len(chunk)
1397 pos += len(chunk)
1398 if chunknum == len(self._chunkindex):
1398 if chunknum == len(self._chunkindex):
1399 self._chunkindex.append((pos, self._tellfp()))
1399 self._chunkindex.append((pos, self._tellfp()))
1400
1400
1401 yield chunk
1401 yield chunk
1402
1402
1403 def _findchunk(self, pos):
1403 def _findchunk(self, pos):
1404 '''for a given payload position, return a chunk number and offset'''
1404 '''for a given payload position, return a chunk number and offset'''
1405 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1405 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1406 if ppos == pos:
1406 if ppos == pos:
1407 return chunk, 0
1407 return chunk, 0
1408 elif ppos > pos:
1408 elif ppos > pos:
1409 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1409 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1410 raise ValueError('Unknown chunk')
1410 raise ValueError('Unknown chunk')
1411
1411
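A worked example of the index above, with invented offsets. Suppose three 32768-byte chunks have been read so far:

# _chunkindex = [(0, 100), (32768, 32876), (65536, 65652)]
# each entry pairs a decoded payload offset with the matching file offset
# _findchunk(40000) -> (1, 7232): payload offset 40000 lies 7232 bytes
# into chunk 1, so seek() restarts reading at file offset 32876 and
# discards the first 7232 decoded bytes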
1412 def tell(self):
1412 def tell(self):
1413 return self._pos
1413 return self._pos
1414
1414
1415 def seek(self, offset, whence=os.SEEK_SET):
1415 def seek(self, offset, whence=os.SEEK_SET):
1416 if whence == os.SEEK_SET:
1416 if whence == os.SEEK_SET:
1417 newpos = offset
1417 newpos = offset
1418 elif whence == os.SEEK_CUR:
1418 elif whence == os.SEEK_CUR:
1419 newpos = self._pos + offset
1419 newpos = self._pos + offset
1420 elif whence == os.SEEK_END:
1420 elif whence == os.SEEK_END:
1421 if not self.consumed:
1421 if not self.consumed:
1422 # Can't use self.consume() here because it advances self._pos.
1422 # Can't use self.consume() here because it advances self._pos.
1423 chunk = self.read(32768)
1423 chunk = self.read(32768)
1424 while chunk:
1424 while chunk:
1425 chunk = self.read(32768)
1425 chunk = self.read(32768)
1426 newpos = self._chunkindex[-1][0] - offset
1426 newpos = self._chunkindex[-1][0] - offset
1427 else:
1427 else:
1428 raise ValueError('Unknown whence value: %r' % (whence,))
1428 raise ValueError('Unknown whence value: %r' % (whence,))
1429
1429
1430 if newpos > self._chunkindex[-1][0] and not self.consumed:
1430 if newpos > self._chunkindex[-1][0] and not self.consumed:
1431 # Can't use self.consume() here because it advances self._pos.
1431 # Can't use self.consume() here because it advances self._pos.
1432 chunk = self.read(32768)
1432 chunk = self.read(32768)
1433 while chunk:
1433 while chunk:
1434 chunk = self.read(32768)
1434 chunk = self.read(32768)
1435
1435
1436 if not 0 <= newpos <= self._chunkindex[-1][0]:
1436 if not 0 <= newpos <= self._chunkindex[-1][0]:
1437 raise ValueError('Offset out of range')
1437 raise ValueError('Offset out of range')
1438
1438
1439 if self._pos != newpos:
1439 if self._pos != newpos:
1440 chunk, internaloffset = self._findchunk(newpos)
1440 chunk, internaloffset = self._findchunk(newpos)
1441 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1441 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1442 adjust = self.read(internaloffset)
1442 adjust = self.read(internaloffset)
1443 if len(adjust) != internaloffset:
1443 if len(adjust) != internaloffset:
1444 raise error.Abort(_('Seek failed'))
1444 raise error.Abort(_('Seek failed'))
1445 self._pos = newpos
1445 self._pos = newpos
1446
1446
1447 def _seekfp(self, offset, whence=0):
1447 def _seekfp(self, offset, whence=0):
1448 """move the underlying file pointer
1448 """move the underlying file pointer
1449
1449
1450 This method is meant for internal usage by the bundle2 protocol only.
1450 This method is meant for internal usage by the bundle2 protocol only.
1451 It directly manipulates the low-level stream, including bundle2-level
1451 It directly manipulates the low-level stream, including bundle2-level
1452 instructions.
1452 instructions.
1453
1453
1454 Do not use it to implement higher-level logic or methods."""
1454 Do not use it to implement higher-level logic or methods."""
1455 if self._seekable:
1455 if self._seekable:
1456 return self._fp.seek(offset, whence)
1456 return self._fp.seek(offset, whence)
1457 else:
1457 else:
1458 raise NotImplementedError(_('File pointer is not seekable'))
1458 raise NotImplementedError(_('File pointer is not seekable'))
1459
1459
1460 def _tellfp(self):
1460 def _tellfp(self):
1461 """return the file offset, or None if file is not seekable
1461 """return the file offset, or None if file is not seekable
1462
1462
1463 This method is meant for internal usage by the bundle2 protocol only.
1463 This method is meant for internal usage by the bundle2 protocol only.
1464 It directly manipulates the low-level stream, including bundle2-level
1464 It directly manipulates the low-level stream, including bundle2-level
1465 instructions.
1465 instructions.
1466
1466
1467 Do not use it to implement higher-level logic or methods."""
1467 Do not use it to implement higher-level logic or methods."""
1468 if self._seekable:
1468 if self._seekable:
1469 try:
1469 try:
1470 return self._fp.tell()
1470 return self._fp.tell()
1471 except IOError as e:
1471 except IOError as e:
1472 if e.errno == errno.ESPIPE:
1472 if e.errno == errno.ESPIPE:
1473 self._seekable = False
1473 self._seekable = False
1474 else:
1474 else:
1475 raise
1475 raise
1476 return None
1476 return None
1477
1477
1478 # These are only the static capabilities.
1478 # These are only the static capabilities.
1479 # Check the 'getrepocaps' function for the rest.
1479 # Check the 'getrepocaps' function for the rest.
1480 capabilities = {'HG20': (),
1480 capabilities = {'HG20': (),
1481 'bookmarks': (),
1481 'bookmarks': (),
1482 'error': ('abort', 'unsupportedcontent', 'pushraced',
1482 'error': ('abort', 'unsupportedcontent', 'pushraced',
1483 'pushkey'),
1483 'pushkey'),
1484 'listkeys': (),
1484 'listkeys': (),
1485 'pushkey': (),
1485 'pushkey': (),
1486 'digests': tuple(sorted(util.DIGESTS.keys())),
1486 'digests': tuple(sorted(util.DIGESTS.keys())),
1487 'remote-changegroup': ('http', 'https'),
1487 'remote-changegroup': ('http', 'https'),
1488 'hgtagsfnodes': (),
1488 'hgtagsfnodes': (),
1489 'phases': ('heads',),
1489 'phases': ('heads',),
1490 'stream': ('v2',),
1490 }
1491 }
1491
1492
1492 def getrepocaps(repo, allowpushback=False):
1493 def getrepocaps(repo, allowpushback=False):
1493 """return the bundle2 capabilities for a given repo
1494 """return the bundle2 capabilities for a given repo
1494
1495
1495 Exists to allow extensions (like evolution) to mutate the capabilities.
1496 Exists to allow extensions (like evolution) to mutate the capabilities.
1496 """
1497 """
1497 caps = capabilities.copy()
1498 caps = capabilities.copy()
1498 caps['changegroup'] = tuple(sorted(
1499 caps['changegroup'] = tuple(sorted(
1499 changegroup.supportedincomingversions(repo)))
1500 changegroup.supportedincomingversions(repo)))
1500 if obsolete.isenabled(repo, obsolete.exchangeopt):
1501 if obsolete.isenabled(repo, obsolete.exchangeopt):
1501 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1502 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1502 caps['obsmarkers'] = supportedformat
1503 caps['obsmarkers'] = supportedformat
1503 if allowpushback:
1504 if allowpushback:
1504 caps['pushback'] = ()
1505 caps['pushback'] = ()
1505 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1506 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1506 if cpmode == 'check-related':
1507 if cpmode == 'check-related':
1507 caps['checkheads'] = ('related',)
1508 caps['checkheads'] = ('related',)
1508 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1509 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1509 caps.pop('phases')
1510 caps.pop('phases')
1511 if not repo.ui.configbool('experimental', 'bundle2.stream'):
1512 caps.pop('stream')
1510 return caps
1513 return caps
1511
1514
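With the guard above, a server only advertises the new bundle2 'stream' capability when the corresponding experimental option is enabled, e.g. in hgrc:

[experimental]
bundle2.stream = True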
1512 def bundle2caps(remote):
1515 def bundle2caps(remote):
1513 """return the bundle capabilities of a peer as dict"""
1516 """return the bundle capabilities of a peer as dict"""
1514 raw = remote.capable('bundle2')
1517 raw = remote.capable('bundle2')
1515 if not raw and raw != '':
1518 if not raw and raw != '':
1516 return {}
1519 return {}
1517 capsblob = urlreq.unquote(remote.capable('bundle2'))
1520 capsblob = urlreq.unquote(remote.capable('bundle2'))
1518 return decodecaps(capsblob)
1521 return decodecaps(capsblob)
1519
1522
1520 def obsmarkersversion(caps):
1523 def obsmarkersversion(caps):
1521 """extract the list of supported obsmarkers versions from a bundle2caps dict
1524 """extract the list of supported obsmarkers versions from a bundle2caps dict
1522 """
1525 """
1523 obscaps = caps.get('obsmarkers', ())
1526 obscaps = caps.get('obsmarkers', ())
1524 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1527 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1525
1528
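A small usage example of the helper above:

# obsmarkersversion({'obsmarkers': ('V0', 'V1')}) -> [0, 1]
# obsmarkersversion({}) -> []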
1526 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1529 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1527 vfs=None, compression=None, compopts=None):
1530 vfs=None, compression=None, compopts=None):
1528 if bundletype.startswith('HG10'):
1531 if bundletype.startswith('HG10'):
1529 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1532 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1530 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1533 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1531 compression=compression, compopts=compopts)
1534 compression=compression, compopts=compopts)
1532 elif not bundletype.startswith('HG20'):
1535 elif not bundletype.startswith('HG20'):
1533 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1536 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1534
1537
1535 caps = {}
1538 caps = {}
1536 if 'obsolescence' in opts:
1539 if 'obsolescence' in opts:
1537 caps['obsmarkers'] = ('V1',)
1540 caps['obsmarkers'] = ('V1',)
1538 bundle = bundle20(ui, caps)
1541 bundle = bundle20(ui, caps)
1539 bundle.setcompression(compression, compopts)
1542 bundle.setcompression(compression, compopts)
1540 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1543 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1541 chunkiter = bundle.getchunks()
1544 chunkiter = bundle.getchunks()
1542
1545
1543 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1546 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1544
1547
1545 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1548 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1546 # We should eventually reconcile this logic with the one behind
1549 # We should eventually reconcile this logic with the one behind
1547 # 'exchange.getbundle2partsgenerator'.
1550 # 'exchange.getbundle2partsgenerator'.
1548 #
1551 #
1549 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1552 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1550 # different right now. So we keep them separated for now for the sake of
1553 # different right now. So we keep them separated for now for the sake of
1551 # simplicity.
1554 # simplicity.
1552
1555
1553 # we always want a changegroup in such bundle
1556 # we always want a changegroup in such bundle
1554 cgversion = opts.get('cg.version')
1557 cgversion = opts.get('cg.version')
1555 if cgversion is None:
1558 if cgversion is None:
1556 cgversion = changegroup.safeversion(repo)
1559 cgversion = changegroup.safeversion(repo)
1557 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1560 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1558 part = bundler.newpart('changegroup', data=cg.getchunks())
1561 part = bundler.newpart('changegroup', data=cg.getchunks())
1559 part.addparam('version', cg.version)
1562 part.addparam('version', cg.version)
1560 if 'clcount' in cg.extras:
1563 if 'clcount' in cg.extras:
1561 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1564 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1562 mandatory=False)
1565 mandatory=False)
1563 if opts.get('phases') and repo.revs('%ln and secret()',
1566 if opts.get('phases') and repo.revs('%ln and secret()',
1564 outgoing.missingheads):
1567 outgoing.missingheads):
1565 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1568 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1566
1569
1567 addparttagsfnodescache(repo, bundler, outgoing)
1570 addparttagsfnodescache(repo, bundler, outgoing)
1568
1571
1569 if opts.get('obsolescence', False):
1572 if opts.get('obsolescence', False):
1570 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1573 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1571 buildobsmarkerspart(bundler, obsmarkers)
1574 buildobsmarkerspart(bundler, obsmarkers)
1572
1575
1573 if opts.get('phases', False):
1576 if opts.get('phases', False):
1574 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1577 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1575 phasedata = phases.binaryencode(headsbyphase)
1578 phasedata = phases.binaryencode(headsbyphase)
1576 bundler.newpart('phase-heads', data=phasedata)
1579 bundler.newpart('phase-heads', data=phasedata)
1577
1580
1578 def addparttagsfnodescache(repo, bundler, outgoing):
1581 def addparttagsfnodescache(repo, bundler, outgoing):
1579 # we include the tags fnode cache for the bundle changeset
1582 # we include the tags fnode cache for the bundle changeset
1580 # (as an optional part)
1583 # (as an optional part)
1581 cache = tags.hgtagsfnodescache(repo.unfiltered())
1584 cache = tags.hgtagsfnodescache(repo.unfiltered())
1582 chunks = []
1585 chunks = []
1583
1586
1584 # .hgtags fnodes are only relevant for head changesets. While we could
1587 # .hgtags fnodes are only relevant for head changesets. While we could
1585 # transfer values for all known nodes, there will likely be little to
1588 # transfer values for all known nodes, there will likely be little to
1586 # no benefit.
1589 # no benefit.
1587 #
1590 #
1588 # We don't bother using a generator to produce output data because
1591 # We don't bother using a generator to produce output data because
1589 # a) we only have 40 bytes per head and even esoteric numbers of heads
1592 # a) we only have 40 bytes per head and even esoteric numbers of heads
1590 # consume little memory (1M heads is 40MB) b) we don't want to send the
1593 # consume little memory (1M heads is 40MB) b) we don't want to send the
1591 # part if we don't have entries and knowing if we have entries requires
1594 # part if we don't have entries and knowing if we have entries requires
1592 # cache lookups.
1595 # cache lookups.
1593 for node in outgoing.missingheads:
1596 for node in outgoing.missingheads:
1594 # Don't compute missing, as this may slow down serving.
1597 # Don't compute missing, as this may slow down serving.
1595 fnode = cache.getfnode(node, computemissing=False)
1598 fnode = cache.getfnode(node, computemissing=False)
1596 if fnode is not None:
1599 if fnode is not None:
1597 chunks.extend([node, fnode])
1600 chunks.extend([node, fnode])
1598
1601
1599 if chunks:
1602 if chunks:
1600 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1603 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1601
1604
1602 def buildobsmarkerspart(bundler, markers):
1605 def buildobsmarkerspart(bundler, markers):
1603 """add an obsmarker part to the bundler with <markers>
1606 """add an obsmarker part to the bundler with <markers>
1604
1607
1605 No part is created if markers is empty.
1608 No part is created if markers is empty.
1606 Raises ValueError if the bundler doesn't support any known obsmarker format.
1609 Raises ValueError if the bundler doesn't support any known obsmarker format.
1607 """
1610 """
1608 if not markers:
1611 if not markers:
1609 return None
1612 return None
1610
1613
1611 remoteversions = obsmarkersversion(bundler.capabilities)
1614 remoteversions = obsmarkersversion(bundler.capabilities)
1612 version = obsolete.commonversion(remoteversions)
1615 version = obsolete.commonversion(remoteversions)
1613 if version is None:
1616 if version is None:
1614 raise ValueError('bundler does not support common obsmarker format')
1617 raise ValueError('bundler does not support common obsmarker format')
1615 stream = obsolete.encodemarkers(markers, True, version=version)
1618 stream = obsolete.encodemarkers(markers, True, version=version)
1616 return bundler.newpart('obsmarkers', data=stream)
1619 return bundler.newpart('obsmarkers', data=stream)
1617
1620
1618 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1621 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1619 compopts=None):
1622 compopts=None):
1620 """Write a bundle file and return its filename.
1623 """Write a bundle file and return its filename.
1621
1624
1622 Existing files will not be overwritten.
1625 Existing files will not be overwritten.
1623 If no filename is specified, a temporary file is created.
1626 If no filename is specified, a temporary file is created.
1624 bz2 compression can be turned off.
1627 bz2 compression can be turned off.
1625 The bundle file will be deleted in case of errors.
1628 The bundle file will be deleted in case of errors.
1626 """
1629 """
1627
1630
1628 if bundletype == "HG20":
1631 if bundletype == "HG20":
1629 bundle = bundle20(ui)
1632 bundle = bundle20(ui)
1630 bundle.setcompression(compression, compopts)
1633 bundle.setcompression(compression, compopts)
1631 part = bundle.newpart('changegroup', data=cg.getchunks())
1634 part = bundle.newpart('changegroup', data=cg.getchunks())
1632 part.addparam('version', cg.version)
1635 part.addparam('version', cg.version)
1633 if 'clcount' in cg.extras:
1636 if 'clcount' in cg.extras:
1634 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1637 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1635 mandatory=False)
1638 mandatory=False)
1636 chunkiter = bundle.getchunks()
1639 chunkiter = bundle.getchunks()
1637 else:
1640 else:
1638 # compression argument is only for the bundle2 case
1641 # compression argument is only for the bundle2 case
1639 assert compression is None
1642 assert compression is None
1640 if cg.version != '01':
1643 if cg.version != '01':
1641 raise error.Abort(_('old bundle types only support v1 '
1644 raise error.Abort(_('old bundle types only support v1 '
1642 'changegroups'))
1645 'changegroups'))
1643 header, comp = bundletypes[bundletype]
1646 header, comp = bundletypes[bundletype]
1644 if comp not in util.compengines.supportedbundletypes:
1647 if comp not in util.compengines.supportedbundletypes:
1645 raise error.Abort(_('unknown stream compression type: %s')
1648 raise error.Abort(_('unknown stream compression type: %s')
1646 % comp)
1649 % comp)
1647 compengine = util.compengines.forbundletype(comp)
1650 compengine = util.compengines.forbundletype(comp)
1648 def chunkiter():
1651 def chunkiter():
1649 yield header
1652 yield header
1650 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1653 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1651 yield chunk
1654 yield chunk
1652 chunkiter = chunkiter()
1655 chunkiter = chunkiter()
1653
1656
1654 # parse the changegroup data, otherwise we will block
1657 # parse the changegroup data, otherwise we will block
1655 # in case of sshrepo because we don't know the end of the stream
1658 # in case of sshrepo because we don't know the end of the stream
1656 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1659 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1657
1660
1658 def combinechangegroupresults(op):
1661 def combinechangegroupresults(op):
1659 """logic to combine 0 or more addchangegroup results into one"""
1662 """logic to combine 0 or more addchangegroup results into one"""
1660 results = [r.get('return', 0)
1663 results = [r.get('return', 0)
1661 for r in op.records['changegroup']]
1664 for r in op.records['changegroup']]
1662 changedheads = 0
1665 changedheads = 0
1663 result = 1
1666 result = 1
1664 for ret in results:
1667 for ret in results:
1665 # If any changegroup result is 0, return 0
1668 # If any changegroup result is 0, return 0
1666 if ret == 0:
1669 if ret == 0:
1667 result = 0
1670 result = 0
1668 break
1671 break
1669 if ret < -1:
1672 if ret < -1:
1670 changedheads += ret + 1
1673 changedheads += ret + 1
1671 elif ret > 1:
1674 elif ret > 1:
1672 changedheads += ret - 1
1675 changedheads += ret - 1
1673 if changedheads > 0:
1676 if changedheads > 0:
1674 result = 1 + changedheads
1677 result = 1 + changedheads
1675 elif changedheads < 0:
1678 elif changedheads < 0:
1676 result = -1 + changedheads
1679 result = -1 + changedheads
1677 return result
1680 return result
1678
1681
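A worked example of the combination rule above: individual results encode 1 + <heads added> (or -1 - <heads removed>), so:

# combinechangegroupresults: [2, 3] -> 4 (changedheads = 1 + 2 = 3)
#                            [0, 5] -> 0 (any zero short-circuits)
#                            [-3]   -> -3 (two heads removed)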
1679 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1682 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1680 'targetphase'))
1683 'targetphase'))
1681 def handlechangegroup(op, inpart):
1684 def handlechangegroup(op, inpart):
1682 """apply a changegroup part on the repo
1685 """apply a changegroup part on the repo
1683
1686
1684 This is a very early implementation that will massive rework before being
1684 This is a very early implementation that will see massive rework before being
1687 This is a very early implementation that will see massive rework before being
1685 inflicted on any end-user.
1688 inflicted on any end-user.
1689 """
1687 tr = op.gettransaction()
1690 tr = op.gettransaction()
1688 unpackerversion = inpart.params.get('version', '01')
1691 unpackerversion = inpart.params.get('version', '01')
1689 # We should raise an appropriate exception here
1692 # We should raise an appropriate exception here
1690 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1693 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1691 # the source and url passed here are overwritten by the one contained in
1694 # the source and url passed here are overwritten by the one contained in
1692 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1695 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1693 nbchangesets = None
1696 nbchangesets = None
1694 if 'nbchanges' in inpart.params:
1697 if 'nbchanges' in inpart.params:
1695 nbchangesets = int(inpart.params.get('nbchanges'))
1698 nbchangesets = int(inpart.params.get('nbchanges'))
1696 if ('treemanifest' in inpart.params and
1699 if ('treemanifest' in inpart.params and
1697 'treemanifest' not in op.repo.requirements):
1700 'treemanifest' not in op.repo.requirements):
1698 if len(op.repo.changelog) != 0:
1701 if len(op.repo.changelog) != 0:
1699 raise error.Abort(_(
1702 raise error.Abort(_(
1700 "bundle contains tree manifests, but local repo is "
1703 "bundle contains tree manifests, but local repo is "
1701 "non-empty and does not use tree manifests"))
1704 "non-empty and does not use tree manifests"))
1702 op.repo.requirements.add('treemanifest')
1705 op.repo.requirements.add('treemanifest')
1703 op.repo._applyopenerreqs()
1706 op.repo._applyopenerreqs()
1704 op.repo._writerequirements()
1707 op.repo._writerequirements()
1705 extrakwargs = {}
1708 extrakwargs = {}
1706 targetphase = inpart.params.get('targetphase')
1709 targetphase = inpart.params.get('targetphase')
1707 if targetphase is not None:
1710 if targetphase is not None:
1708 extrakwargs['targetphase'] = int(targetphase)
1711 extrakwargs['targetphase'] = int(targetphase)
1709 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1712 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1710 expectedtotal=nbchangesets, **extrakwargs)
1713 expectedtotal=nbchangesets, **extrakwargs)
1711 if op.reply is not None:
1714 if op.reply is not None:
1712 # This is definitely not the final form of this
1715 # This is definitely not the final form of this
1713 # return. But one need to start somewhere.
1716 # return. But one need to start somewhere.
1714 part = op.reply.newpart('reply:changegroup', mandatory=False)
1717 part = op.reply.newpart('reply:changegroup', mandatory=False)
1715 part.addparam(
1718 part.addparam(
1716 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1719 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1717 part.addparam('return', '%i' % ret, mandatory=False)
1720 part.addparam('return', '%i' % ret, mandatory=False)
1718 assert not inpart.read()
1721 assert not inpart.read()
1719
1722
_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what the
        client retrieved matches what the server knows about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what the
        client retrieved matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:bookmarks')
def handlecheckbookmarks(op, inpart):
    """check location of bookmarks

    This part is used to detect push races on bookmarks. It contains binary
    encoded (bookmark, node) tuples. If the local state does not match the
    one in the part, a PushRaced exception is raised.
    """
    bookdata = bookmarks.binarydecode(inpart)

    msgstandard = ('repository changed while pushing - please try again '
                   '(bookmark "%s" moved from %s to %s)')
    msgmissing = ('repository changed while pushing - please try again '
                  '(bookmark "%s" is missing, expected %s)')
    msgexist = ('repository changed while pushing - please try again '
                '(bookmark "%s" set on %s, expected missing)')
    for book, node in bookdata:
        currentnode = op.repo._bookmarks.get(book)
        if currentnode != node:
            if node is None:
                finalmsg = msgexist % (book, nodemod.short(currentnode))
            elif currentnode is None:
                finalmsg = msgmissing % (book, nodemod.short(node))
            else:
                finalmsg = msgstandard % (book, nodemod.short(node),
                                          nodemod.short(currentnode))
            raise error.PushRaced(finalmsg)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. Other activity on unrelated heads is ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

@parthandler('check:phases')
def handlecheckphases(op, inpart):
    """check that phase boundaries of the repository did not change

    This is used to detect a push race.
    """
    phasetonodes = phases.binarydecode(inpart)
    unfi = op.repo.unfiltered()
    cl = unfi.changelog
    phasecache = unfi._phasecache
    msg = ('repository changed while pushing - please try again '
           '(%s is %s expected %s)')
    for expectedphase, nodes in enumerate(phasetonodes):
        for n in nodes:
            actualphase = phasecache.phase(unfi, cl.rev(n))
            if actualphase != expectedphase:
                finalmsg = msg % (nodemod.short(n),
                                  phases.phasenames[actualphase],
                                  phases.phasenames[expectedphase])
                raise error.PushRaced(finalmsg)

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('bookmarks')
def handlebookmark(op, inpart):
    """transmit bookmark information

    The part contains binary encoded bookmark information.

    The exact behavior of this part can be controlled by the 'bookmarks' mode
    on the bundle operation.

    When mode is 'apply' (the default), the bookmark information is applied
    as-is to the unbundling repository. Make sure a 'check:bookmarks' part is
    issued earlier to check for push races in such an update. This behavior
    is suitable for pushing.

    When mode is 'records', the information is recorded into the 'bookmarks'
    records of the bundle operation. This behavior is suitable for pulling.
    """
    changes = bookmarks.binarydecode(inpart)

    pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
    bookmarksmode = op.modes.get('bookmarks', 'apply')

    if bookmarksmode == 'apply':
        tr = op.gettransaction()
        bookstore = op.repo._bookmarks
        if pushkeycompat:
            allhooks = []
            for book, node in changes:
                hookargs = tr.hookargs.copy()
                hookargs['pushkeycompat'] = '1'
                hookargs['namespace'] = 'bookmark'
                hookargs['key'] = book
                hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
                hookargs['new'] = nodemod.hex(node if node is not None else '')
                allhooks.append(hookargs)

            for hookargs in allhooks:
                op.repo.hook('prepushkey', throw=True, **hookargs)

        bookstore.applychanges(op.repo, op.gettransaction(), changes)

        if pushkeycompat:
            def runhook():
                for hookargs in allhooks:
                    op.repo.hook('pushkey', **hookargs)
            op.repo._afterlock(runhook)

    elif bookmarksmode == 'records':
        for book, node in changes:
            record = {'bookmark': book, 'node': node}
            op.records.add('bookmarks', record)
    else:
        raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)

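# Illustrative sketch (assumption): how a puller would select the 'records'
# mode described in the docstring above, instead of applying bookmarks
# directly.
#
#   op = bundleoperation(repo, gettransaction)
#   op.modes['bookmarks'] = 'records'
#   processbundle(repo, unbundler, op=op)
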
@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)

@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

@parthandler('pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool('push', 'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = "USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)

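# Illustrative usage note (assumption about the end-to-end flow): with
# 'push.pushvars.server=True' on the server, a client-side
#
#   hg push --pushvars "DEBUG=1"
#
# should surface as HG_USERVAR_DEBUG=1 in the environment of server-side
# hooks, since hook arguments are exported with an HG_ prefix.
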
@parthandler('stream', ('requirements', 'filecount', 'bytecount', 'version'))
def handlestreambundle(op, part):
    """apply a stream clone payload carried in a bundle2 'stream' part"""
    version = part.params['version']
    if version != 'v2':
        raise error.Abort(_('unknown stream bundle version %s') % version)
    requirements = part.params['requirements'].split()
    filecount = int(part.params['filecount'])
    bytecount = int(part.params['bytecount'])

    repo = op.repo
    if len(repo):
        msg = _('cannot apply stream clone to non-empty repository')
        raise error.Abort(msg)

    repo.ui.debug('applying stream bundle\n')
    streamclone.applybundlev2(repo, part, filecount, bytecount,
                              requirements)

    # new requirements = old non-format requirements +
    #                    new format-related remote requirements
    # requirements from the streamed-in repository
    repo.requirements = set(requirements) | (
        repo.requirements - repo.supportedformats)
    repo._applyopenerreqs()
    repo._writerequirements()
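
# Illustrative sketch (assumption; the generating side lives in
# streamclone.py and is not shown in this hunk): the server would emit
# something like the following, where 'it' is an iterator over the raw
# store files and the counts are computed up front.
#
#   part = bundler.newpart('stream', data=it)
#   part.addparam('requirements', ' '.join(repo.requirements),
#                 mandatory=True)
#   part.addparam('filecount', '%d' % filecount, mandatory=True)
#   part.addparam('bytecount', '%d' % bytecount, mandatory=True)
#   part.addparam('version', 'v2', mandatory=True)
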
@@ -1,1292 +1,1295
# configitems.py - centralized declaration of configuration options
#
# Copyright 2017 Pierre-Yves David <pierre-yves.david@octobus.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import functools
import re

from . import (
    encoding,
    error,
)

def loadconfigtable(ui, extname, configtable):
    """update config items known to the ui with the extension ones"""
    for section, items in configtable.items():
        knownitems = ui._knownconfig.setdefault(section, itemregister())
        knownkeys = set(knownitems)
        newkeys = set(items)
        for key in sorted(knownkeys & newkeys):
            msg = "extension '%s' overwrites config item '%s.%s'"
            msg %= (extname, section, key)
            ui.develwarn(msg, config='warn-config')

        knownitems.update(items)

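# Illustrative sketch (assumption, not part of this file): an extension
# declares its items through mercurial.registrar; the resulting
# 'configtable' is what loadconfigtable() above merges into the ui.
#
#   from mercurial import registrar
#   configtable = {}
#   configitem = registrar.configitem(configtable)
#   configitem('myext', 'enabled',          # 'myext.enabled' is hypothetical
#       default=False,
#   )
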
class configitem(object):
    """represent a known config item

    :section: the official config section where to find this item,
    :name: the official name within the section,
    :default: default value for this item,
    :alias: optional list of tuples as alternatives,
    :generic: this is a generic definition, the name is matched against it
              as a regular expression.
    """

    def __init__(self, section, name, default=None, alias=(),
                 generic=False, priority=0):
        self.section = section
        self.name = name
        self.default = default
        self.alias = list(alias)
        self.generic = generic
        self.priority = priority
        self._re = None
        if generic:
            self._re = re.compile(self.name)

class itemregister(dict):
    """A specialized dictionary that can handle wild-card selection"""

    def __init__(self):
        super(itemregister, self).__init__()
        self._generics = set()

    def update(self, other):
        super(itemregister, self).update(other)
        self._generics.update(other._generics)

    def __setitem__(self, key, item):
        super(itemregister, self).__setitem__(key, item)
        if item.generic:
            self._generics.add(item)

    def get(self, key):
        baseitem = super(itemregister, self).get(key)
        if baseitem is not None and not baseitem.generic:
            return baseitem

        # search for a matching generic item
        generics = sorted(self._generics, key=(lambda x: (x.priority, x.name)))
        for item in generics:
            # we use 'match' instead of 'search' to make the matching simpler
            # for people unfamiliar with regular expressions. Having the match
            # rooted at the start of the string will produce less surprising
            # results for users writing simple regexes for sub-attributes.
            #
            # For example, using "color\..*" with match produces an
            # unsurprising result, while using search could suddenly match
            # apparently unrelated configuration that happens to contain
            # "color." anywhere. This is a tradeoff where we favor requiring
            # ".*" on some matches to avoid the need to prefix most patterns
            # with "^". The "^" seems more error prone.
            if item._re.match(key):
                return item

        return None

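# Illustrative sketch (derived from the code above): with a generic item
# registered as configitem('color', '.*', generic=True), a lookup like
#
#   ui._knownconfig['color'].get('diff.inserted')
#
# finds no exact entry, falls through to the regex branch, and resolves to
# the generic item's default. (The 'diff.inserted' key is hypothetical here.)
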
coreitems = {}

def _register(configtable, *args, **kwargs):
    item = configitem(*args, **kwargs)
    section = configtable.setdefault(item.section, itemregister())
    if item.name in section:
        msg = "duplicated config item registration for '%s.%s'"
        raise error.ProgrammingError(msg % (item.section, item.name))
    section[item.name] = item

# special value for the case where the default is derived from other values
dynamicdefault = object()

# Registering actual config items

def getitemregister(configtable):
    f = functools.partial(_register, configtable)
    # export pseudo enum as configitem.*
    f.dynamicdefault = dynamicdefault
    return f

coreconfigitem = getitemregister(coreitems)

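# Illustrative note: every declaration below goes through _register(), so a
# duplicate section/name pair raises ProgrammingError, and
# default=dynamicdefault marks items whose default is computed at lookup
# time (e.g. 'color.pagermode' below). A hypothetical registration:
#
#   coreconfigitem('myext', 'verbose',
#       default=False,
#   )
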
coreconfigitem('alias', '.*',
    default=None,
    generic=True,
)
coreconfigitem('annotate', 'nodates',
    default=False,
)
coreconfigitem('annotate', 'showfunc',
    default=False,
)
coreconfigitem('annotate', 'unified',
    default=None,
)
coreconfigitem('annotate', 'git',
    default=False,
)
coreconfigitem('annotate', 'ignorews',
    default=False,
)
coreconfigitem('annotate', 'ignorewsamount',
    default=False,
)
coreconfigitem('annotate', 'ignoreblanklines',
    default=False,
)
coreconfigitem('annotate', 'ignorewseol',
    default=False,
)
coreconfigitem('annotate', 'nobinary',
    default=False,
)
coreconfigitem('annotate', 'noprefix',
    default=False,
)
coreconfigitem('auth', 'cookiefile',
    default=None,
)
# bookmarks.pushing: internal hack for discovery
coreconfigitem('bookmarks', 'pushing',
    default=list,
)
# bundle.mainreporoot: internal hack for bundlerepo
coreconfigitem('bundle', 'mainreporoot',
    default='',
)
# bundle.reorder: experimental config
coreconfigitem('bundle', 'reorder',
    default='auto',
)
coreconfigitem('censor', 'policy',
    default='abort',
)
coreconfigitem('chgserver', 'idletimeout',
    default=3600,
)
coreconfigitem('chgserver', 'skiphash',
    default=False,
)
coreconfigitem('cmdserver', 'log',
    default=None,
)
coreconfigitem('color', '.*',
    default=None,
    generic=True,
)
coreconfigitem('color', 'mode',
    default='auto',
)
coreconfigitem('color', 'pagermode',
    default=dynamicdefault,
)
coreconfigitem('commands', 'show.aliasprefix',
    default=list,
)
coreconfigitem('commands', 'status.relative',
    default=False,
)
coreconfigitem('commands', 'status.skipstates',
    default=[],
)
coreconfigitem('commands', 'status.verbose',
    default=False,
)
coreconfigitem('commands', 'update.check',
    default=None,
    # Deprecated, remove after 4.4 release
    alias=[('experimental', 'updatecheck')]
)
coreconfigitem('commands', 'update.requiredest',
    default=False,
)
coreconfigitem('committemplate', '.*',
    default=None,
    generic=True,
)
coreconfigitem('convert', 'cvsps.cache',
    default=True,
)
coreconfigitem('convert', 'cvsps.fuzz',
    default=60,
)
coreconfigitem('convert', 'cvsps.logencoding',
    default=None,
)
coreconfigitem('convert', 'cvsps.mergefrom',
    default=None,
)
coreconfigitem('convert', 'cvsps.mergeto',
    default=None,
)
coreconfigitem('convert', 'git.committeractions',
    default=lambda: ['messagedifferent'],
)
coreconfigitem('convert', 'git.extrakeys',
    default=list,
)
coreconfigitem('convert', 'git.findcopiesharder',
    default=False,
)
coreconfigitem('convert', 'git.remoteprefix',
    default='remote',
)
coreconfigitem('convert', 'git.renamelimit',
    default=400,
)
coreconfigitem('convert', 'git.saverev',
    default=True,
)
coreconfigitem('convert', 'git.similarity',
    default=50,
)
coreconfigitem('convert', 'git.skipsubmodules',
    default=False,
)
coreconfigitem('convert', 'hg.clonebranches',
    default=False,
)
coreconfigitem('convert', 'hg.ignoreerrors',
    default=False,
)
coreconfigitem('convert', 'hg.revs',
    default=None,
)
coreconfigitem('convert', 'hg.saverev',
    default=False,
)
coreconfigitem('convert', 'hg.sourcename',
    default=None,
)
coreconfigitem('convert', 'hg.startrev',
    default=None,
)
coreconfigitem('convert', 'hg.tagsbranch',
    default='default',
)
coreconfigitem('convert', 'hg.usebranchnames',
    default=True,
)
coreconfigitem('convert', 'ignoreancestorcheck',
    default=False,
)
coreconfigitem('convert', 'localtimezone',
    default=False,
)
coreconfigitem('convert', 'p4.encoding',
    default=dynamicdefault,
)
coreconfigitem('convert', 'p4.startrev',
    default=0,
)
coreconfigitem('convert', 'skiptags',
    default=False,
)
coreconfigitem('convert', 'svn.debugsvnlog',
    default=True,
)
coreconfigitem('convert', 'svn.trunk',
    default=None,
)
coreconfigitem('convert', 'svn.tags',
    default=None,
)
coreconfigitem('convert', 'svn.branches',
    default=None,
)
coreconfigitem('convert', 'svn.startrev',
    default=0,
)
coreconfigitem('debug', 'dirstate.delaywrite',
    default=0,
)
coreconfigitem('defaults', '.*',
    default=None,
    generic=True,
)
coreconfigitem('devel', 'all-warnings',
    default=False,
)
coreconfigitem('devel', 'bundle2.debug',
    default=False,
)
coreconfigitem('devel', 'cache-vfs',
    default=None,
)
coreconfigitem('devel', 'check-locks',
    default=False,
)
coreconfigitem('devel', 'check-relroot',
    default=False,
)
coreconfigitem('devel', 'default-date',
    default=None,
)
coreconfigitem('devel', 'deprec-warn',
    default=False,
)
coreconfigitem('devel', 'disableloaddefaultcerts',
    default=False,
)
coreconfigitem('devel', 'warn-empty-changegroup',
    default=False,
)
coreconfigitem('devel', 'legacy.exchange',
    default=list,
)
coreconfigitem('devel', 'servercafile',
    default='',
)
coreconfigitem('devel', 'serverexactprotocol',
    default='',
)
coreconfigitem('devel', 'serverrequirecert',
    default=False,
)
coreconfigitem('devel', 'strip-obsmarkers',
    default=True,
)
coreconfigitem('devel', 'warn-config',
    default=None,
)
coreconfigitem('devel', 'warn-config-default',
    default=None,
)
coreconfigitem('devel', 'user.obsmarker',
    default=None,
)
coreconfigitem('devel', 'warn-config-unknown',
    default=None,
)
coreconfigitem('devel', 'debug.peer-request',
    default=False,
)
coreconfigitem('diff', 'nodates',
    default=False,
)
coreconfigitem('diff', 'showfunc',
    default=False,
)
coreconfigitem('diff', 'unified',
    default=None,
)
coreconfigitem('diff', 'git',
    default=False,
)
coreconfigitem('diff', 'ignorews',
    default=False,
)
coreconfigitem('diff', 'ignorewsamount',
    default=False,
)
coreconfigitem('diff', 'ignoreblanklines',
    default=False,
)
coreconfigitem('diff', 'ignorewseol',
    default=False,
)
coreconfigitem('diff', 'nobinary',
    default=False,
)
coreconfigitem('diff', 'noprefix',
    default=False,
)
coreconfigitem('email', 'bcc',
    default=None,
)
coreconfigitem('email', 'cc',
    default=None,
)
coreconfigitem('email', 'charsets',
    default=list,
)
coreconfigitem('email', 'from',
    default=None,
)
coreconfigitem('email', 'method',
    default='smtp',
)
coreconfigitem('email', 'reply-to',
    default=None,
)
coreconfigitem('email', 'to',
    default=None,
)
coreconfigitem('experimental', 'archivemetatemplate',
    default=dynamicdefault,
)
coreconfigitem('experimental', 'bundle-phases',
    default=False,
)
coreconfigitem('experimental', 'bundle2-advertise',
    default=True,
)
coreconfigitem('experimental', 'bundle2-output-capture',
    default=False,
)
coreconfigitem('experimental', 'bundle2.pushback',
    default=False,
)
+coreconfigitem('experimental', 'bundle2.stream',
+    default=False,
+)
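
# Illustration (editor's sketch, not part of this changeset): the
# 'experimental.bundle2.stream' item added above is a boolean knob, so
# consuming code would read it through the standard ui accessor, which
# returns the registered default (False) when nothing is configured.
def bundle2streamrequested(ui):
    # True only when a user opts in via "[experimental] bundle2.stream = yes"
    return ui.configbool('experimental', 'bundle2.stream')
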
coreconfigitem('experimental', 'bundle2lazylocking',
    default=False,
)
coreconfigitem('experimental', 'bundlecomplevel',
    default=None,
)
coreconfigitem('experimental', 'changegroup3',
    default=False,
)
coreconfigitem('experimental', 'clientcompressionengines',
    default=list,
)
coreconfigitem('experimental', 'copytrace',
    default='on',
)
coreconfigitem('experimental', 'copytrace.movecandidateslimit',
    default=100,
)
coreconfigitem('experimental', 'copytrace.sourcecommitlimit',
    default=100,
)
coreconfigitem('experimental', 'crecordtest',
    default=None,
)
coreconfigitem('experimental', 'directaccess',
    default=False,
)
coreconfigitem('experimental', 'directaccess.revnums',
    default=False,
)
coreconfigitem('experimental', 'editortmpinhg',
    default=False,
)
coreconfigitem('experimental', 'evolution',
    default=list,
)
coreconfigitem('experimental', 'evolution.allowdivergence',
    default=False,
    alias=[('experimental', 'allowdivergence')]
)
coreconfigitem('experimental', 'evolution.allowunstable',
    default=None,
)
coreconfigitem('experimental', 'evolution.createmarkers',
    default=None,
)
coreconfigitem('experimental', 'evolution.effect-flags',
    default=True,
    alias=[('experimental', 'effect-flags')]
)
coreconfigitem('experimental', 'evolution.exchange',
    default=None,
)
coreconfigitem('experimental', 'evolution.bundle-obsmarker',
    default=False,
)
coreconfigitem('experimental', 'evolution.report-instabilities',
    default=True,
)
coreconfigitem('experimental', 'evolution.track-operation',
    default=True,
)
coreconfigitem('experimental', 'worddiff',
    default=False,
)
coreconfigitem('experimental', 'maxdeltachainspan',
    default=-1,
)
coreconfigitem('experimental', 'mmapindexthreshold',
    default=None,
)
coreconfigitem('experimental', 'nonnormalparanoidcheck',
    default=False,
)
coreconfigitem('experimental', 'exportableenviron',
    default=list,
)
coreconfigitem('experimental', 'extendedheader.index',
    default=None,
)
coreconfigitem('experimental', 'extendedheader.similarity',
    default=False,
)
coreconfigitem('experimental', 'format.compression',
    default='zlib',
)
coreconfigitem('experimental', 'graphshorten',
    default=False,
)
coreconfigitem('experimental', 'graphstyle.parent',
    default=dynamicdefault,
)
coreconfigitem('experimental', 'graphstyle.missing',
    default=dynamicdefault,
)
coreconfigitem('experimental', 'graphstyle.grandparent',
    default=dynamicdefault,
)
coreconfigitem('experimental', 'hook-track-tags',
    default=False,
)
coreconfigitem('experimental', 'httppostargs',
    default=False,
)
coreconfigitem('experimental', 'manifestv2',
    default=False,
)
coreconfigitem('experimental', 'mergedriver',
    default=None,
)
coreconfigitem('experimental', 'obsmarkers-exchange-debug',
    default=False,
)
coreconfigitem('experimental', 'remotenames',
    default=False,
)
coreconfigitem('experimental', 'revlogv2',
    default=None,
)
coreconfigitem('experimental', 'single-head-per-branch',
    default=False,
)
coreconfigitem('experimental', 'spacemovesdown',
    default=False,
)
coreconfigitem('experimental', 'sparse-read',
    default=False,
)
coreconfigitem('experimental', 'sparse-read.density-threshold',
    default=0.25,
)
coreconfigitem('experimental', 'sparse-read.min-gap-size',
    default='256K',
)
coreconfigitem('experimental', 'treemanifest',
    default=False,
)
coreconfigitem('experimental', 'update.atomic-file',
    default=False,
)
coreconfigitem('extensions', '.*',
    default=None,
    generic=True,
)
coreconfigitem('extdata', '.*',
    default=None,
    generic=True,
)
coreconfigitem('format', 'aggressivemergedeltas',
    default=False,
)
coreconfigitem('format', 'chunkcachesize',
    default=None,
)
coreconfigitem('format', 'dotencode',
    default=True,
)
coreconfigitem('format', 'generaldelta',
    default=False,
)
coreconfigitem('format', 'manifestcachesize',
    default=None,
)
coreconfigitem('format', 'maxchainlen',
    default=None,
)
coreconfigitem('format', 'obsstore-version',
    default=None,
)
coreconfigitem('format', 'usefncache',
    default=True,
)
coreconfigitem('format', 'usegeneraldelta',
    default=True,
)
coreconfigitem('format', 'usestore',
    default=True,
)
coreconfigitem('fsmonitor', 'warn_when_unused',
    default=True,
)
coreconfigitem('fsmonitor', 'warn_update_file_count',
    default=50000,
)
coreconfigitem('hooks', '.*',
    default=dynamicdefault,
    generic=True,
)
coreconfigitem('hgweb-paths', '.*',
    default=list,
    generic=True,
)
coreconfigitem('hostfingerprints', '.*',
    default=list,
    generic=True,
)
coreconfigitem('hostsecurity', 'ciphers',
    default=None,
)
coreconfigitem('hostsecurity', 'disabletls10warning',
    default=False,
)
coreconfigitem('hostsecurity', 'minimumprotocol',
    default=dynamicdefault,
)
coreconfigitem('hostsecurity', '.*:minimumprotocol$',
    default=dynamicdefault,
    generic=True,
)
coreconfigitem('hostsecurity', '.*:ciphers$',
    default=dynamicdefault,
    generic=True,
)
coreconfigitem('hostsecurity', '.*:fingerprints$',
    default=list,
    generic=True,
)
coreconfigitem('hostsecurity', '.*:verifycertsfile$',
    default=None,
    generic=True,
)

coreconfigitem('http_proxy', 'always',
    default=False,
)
coreconfigitem('http_proxy', 'host',
    default=None,
)
coreconfigitem('http_proxy', 'no',
    default=list,
)
coreconfigitem('http_proxy', 'passwd',
    default=None,
)
coreconfigitem('http_proxy', 'user',
    default=None,
)
coreconfigitem('logtoprocess', 'commandexception',
    default=None,
)
coreconfigitem('logtoprocess', 'commandfinish',
    default=None,
)
coreconfigitem('logtoprocess', 'command',
    default=None,
)
coreconfigitem('logtoprocess', 'develwarn',
    default=None,
)
coreconfigitem('logtoprocess', 'uiblocked',
    default=None,
)
coreconfigitem('merge', 'checkunknown',
    default='abort',
)
coreconfigitem('merge', 'checkignored',
    default='abort',
)
coreconfigitem('experimental', 'merge.checkpathconflicts',
    default=False,
)
coreconfigitem('merge', 'followcopies',
    default=True,
)
coreconfigitem('merge', 'on-failure',
    default='continue',
)
coreconfigitem('merge', 'preferancestor',
    default=lambda: ['*'],
)
coreconfigitem('merge-tools', '.*',
    default=None,
    generic=True,
)
coreconfigitem('merge-tools', br'.*\.args$',
    default="$local $base $other",
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.binary$',
    default=False,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.check$',
    default=list,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.checkchanged$',
    default=False,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.executable$',
    default=dynamicdefault,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.fixeol$',
    default=False,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.gui$',
    default=False,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.priority$',
    default=0,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.premerge$',
    default=dynamicdefault,
    generic=True,
    priority=-1,
)
coreconfigitem('merge-tools', br'.*\.symlink$',
    default=False,
    generic=True,
    priority=-1,
)
coreconfigitem('pager', 'attend-.*',
    default=dynamicdefault,
    generic=True,
)
coreconfigitem('pager', 'ignore',
    default=list,
)
coreconfigitem('pager', 'pager',
    default=dynamicdefault,
)
coreconfigitem('patch', 'eol',
    default='strict',
)
coreconfigitem('patch', 'fuzz',
    default=2,
)
coreconfigitem('paths', 'default',
    default=None,
)
coreconfigitem('paths', 'default-push',
    default=None,
)
coreconfigitem('paths', '.*',
    default=None,
    generic=True,
)
coreconfigitem('phases', 'checksubrepos',
    default='follow',
)
coreconfigitem('phases', 'new-commit',
    default='draft',
)
coreconfigitem('phases', 'publish',
    default=True,
)
coreconfigitem('profiling', 'enabled',
    default=False,
)
coreconfigitem('profiling', 'format',
    default='text',
)
coreconfigitem('profiling', 'freq',
    default=1000,
)
coreconfigitem('profiling', 'limit',
    default=30,
)
coreconfigitem('profiling', 'nested',
    default=0,
)
coreconfigitem('profiling', 'output',
    default=None,
)
coreconfigitem('profiling', 'showmax',
    default=0.999,
)
coreconfigitem('profiling', 'showmin',
    default=dynamicdefault,
)
coreconfigitem('profiling', 'sort',
    default='inlinetime',
)
coreconfigitem('profiling', 'statformat',
    default='hotpath',
)
coreconfigitem('profiling', 'type',
    default='stat',
)
coreconfigitem('progress', 'assume-tty',
    default=False,
)
coreconfigitem('progress', 'changedelay',
    default=1,
)
coreconfigitem('progress', 'clear-complete',
    default=True,
)
coreconfigitem('progress', 'debug',
    default=False,
)
coreconfigitem('progress', 'delay',
    default=3,
)
coreconfigitem('progress', 'disable',
    default=False,
)
coreconfigitem('progress', 'estimateinterval',
    default=60.0,
)
coreconfigitem('progress', 'format',
    default=lambda: ['topic', 'bar', 'number', 'estimate'],
)
coreconfigitem('progress', 'refresh',
    default=0.1,
)
coreconfigitem('progress', 'width',
    default=dynamicdefault,
)
coreconfigitem('push', 'pushvars.server',
    default=False,
)
coreconfigitem('server', 'bookmarks-pushkey-compat',
    default=True,
)
coreconfigitem('server', 'bundle1',
    default=True,
)
coreconfigitem('server', 'bundle1gd',
    default=None,
)
coreconfigitem('server', 'bundle1.pull',
    default=None,
)
coreconfigitem('server', 'bundle1gd.pull',
    default=None,
)
coreconfigitem('server', 'bundle1.push',
    default=None,
)
coreconfigitem('server', 'bundle1gd.push',
    default=None,
)
coreconfigitem('server', 'compressionengines',
    default=list,
)
coreconfigitem('server', 'concurrent-push-mode',
    default='strict',
)
coreconfigitem('server', 'disablefullbundle',
    default=False,
)
coreconfigitem('server', 'maxhttpheaderlen',
    default=1024,
)
coreconfigitem('server', 'preferuncompressed',
    default=False,
)
coreconfigitem('server', 'uncompressed',
    default=True,
)
coreconfigitem('server', 'uncompressedallowsecret',
    default=False,
)
coreconfigitem('server', 'validate',
    default=False,
)
coreconfigitem('server', 'zliblevel',
    default=-1,
)
coreconfigitem('share', 'pool',
    default=None,
)
coreconfigitem('share', 'poolnaming',
    default='identity',
)
coreconfigitem('smtp', 'host',
    default=None,
)
coreconfigitem('smtp', 'local_hostname',
    default=None,
)
coreconfigitem('smtp', 'password',
    default=None,
)
coreconfigitem('smtp', 'port',
    default=dynamicdefault,
)
coreconfigitem('smtp', 'tls',
    default='none',
)
coreconfigitem('smtp', 'username',
    default=None,
)
coreconfigitem('sparse', 'missingwarning',
    default=True,
)
coreconfigitem('subrepos', 'allowed',
    default=dynamicdefault, # to make backporting simpler
)
coreconfigitem('subrepos', 'hg:allowed',
    default=dynamicdefault,
)
coreconfigitem('subrepos', 'git:allowed',
    default=dynamicdefault,
)
coreconfigitem('subrepos', 'svn:allowed',
    default=dynamicdefault,
)
coreconfigitem('templates', '.*',
    default=None,
    generic=True,
)
coreconfigitem('trusted', 'groups',
    default=list,
)
coreconfigitem('trusted', 'users',
    default=list,
)
coreconfigitem('ui', '_usedassubrepo',
    default=False,
)
coreconfigitem('ui', 'allowemptycommit',
    default=False,
)
coreconfigitem('ui', 'archivemeta',
    default=True,
)
coreconfigitem('ui', 'askusername',
    default=False,
)
coreconfigitem('ui', 'clonebundlefallback',
    default=False,
)
coreconfigitem('ui', 'clonebundleprefers',
    default=list,
)
coreconfigitem('ui', 'clonebundles',
    default=True,
)
coreconfigitem('ui', 'color',
    default='auto',
)
coreconfigitem('ui', 'commitsubrepos',
    default=False,
)
coreconfigitem('ui', 'debug',
    default=False,
)
coreconfigitem('ui', 'debugger',
    default=None,
)
coreconfigitem('ui', 'editor',
    default=dynamicdefault,
)
coreconfigitem('ui', 'fallbackencoding',
    default=None,
)
coreconfigitem('ui', 'forcecwd',
    default=None,
)
coreconfigitem('ui', 'forcemerge',
    default=None,
)
coreconfigitem('ui', 'formatdebug',
    default=False,
)
coreconfigitem('ui', 'formatjson',
    default=False,
)
coreconfigitem('ui', 'formatted',
    default=None,
)
coreconfigitem('ui', 'graphnodetemplate',
    default=None,
)
coreconfigitem('ui', 'http2debuglevel',
    default=None,
)
coreconfigitem('ui', 'interactive',
    default=None,
)
coreconfigitem('ui', 'interface',
    default=None,
)
coreconfigitem('ui', 'interface.chunkselector',
    default=None,
)
coreconfigitem('ui', 'logblockedtimes',
    default=False,
)
coreconfigitem('ui', 'logtemplate',
    default=None,
)
coreconfigitem('ui', 'merge',
    default=None,
)
coreconfigitem('ui', 'mergemarkers',
    default='basic',
)
coreconfigitem('ui', 'mergemarkertemplate',
    default=('{node|short} '
             '{ifeq(tags, "tip", "", '
             'ifeq(tags, "", "", "{tags} "))}'
             '{if(bookmarks, "{bookmarks} ")}'
             '{ifeq(branch, "default", "", "{branch} ")}'
             '- {author|user}: {desc|firstline}')
)
coreconfigitem('ui', 'nontty',
    default=False,
)
coreconfigitem('ui', 'origbackuppath',
    default=None,
)
coreconfigitem('ui', 'paginate',
    default=True,
)
coreconfigitem('ui', 'patch',
    default=None,
)
coreconfigitem('ui', 'portablefilenames',
    default='warn',
)
coreconfigitem('ui', 'promptecho',
    default=False,
)
coreconfigitem('ui', 'quiet',
    default=False,
)
coreconfigitem('ui', 'quietbookmarkmove',
    default=False,
)
coreconfigitem('ui', 'remotecmd',
    default='hg',
)
coreconfigitem('ui', 'report_untrusted',
    default=True,
)
coreconfigitem('ui', 'rollback',
    default=True,
)
coreconfigitem('ui', 'slash',
    default=False,
)
coreconfigitem('ui', 'ssh',
    default='ssh',
)
coreconfigitem('ui', 'ssherrorhint',
    default=None,
)
coreconfigitem('ui', 'statuscopies',
    default=False,
)
coreconfigitem('ui', 'strict',
    default=False,
)
coreconfigitem('ui', 'style',
    default='',
)
coreconfigitem('ui', 'supportcontact',
    default=None,
)
coreconfigitem('ui', 'textwidth',
    default=78,
)
coreconfigitem('ui', 'timeout',
    default='600',
)
coreconfigitem('ui', 'timeout.warn',
    default=0,
)
coreconfigitem('ui', 'traceback',
    default=False,
)
coreconfigitem('ui', 'tweakdefaults',
    default=False,
)
coreconfigitem('ui', 'usehttp2',
    default=False,
)
coreconfigitem('ui', 'username',
    alias=[('ui', 'user')]
)
coreconfigitem('ui', 'verbose',
    default=False,
)
coreconfigitem('verify', 'skipflags',
    default=None,
)
coreconfigitem('web', 'allowbz2',
    default=False,
)
coreconfigitem('web', 'allowgz',
    default=False,
)
coreconfigitem('web', 'allow-pull',
    alias=[('web', 'allowpull')],
    default=True,
)
coreconfigitem('web', 'allow-push',
    alias=[('web', 'allow_push')],
    default=list,
)
coreconfigitem('web', 'allowzip',
    default=False,
)
coreconfigitem('web', 'archivesubrepos',
    default=False,
)
coreconfigitem('web', 'cache',
    default=True,
)
coreconfigitem('web', 'contact',
    default=None,
)
coreconfigitem('web', 'deny_push',
    default=list,
)
coreconfigitem('web', 'guessmime',
    default=False,
)
coreconfigitem('web', 'hidden',
    default=False,
)
coreconfigitem('web', 'labels',
    default=list,
)
coreconfigitem('web', 'logoimg',
    default='hglogo.png',
)
coreconfigitem('web', 'logourl',
    default='https://mercurial-scm.org/',
)
coreconfigitem('web', 'accesslog',
    default='-',
)
coreconfigitem('web', 'address',
    default='',
)
coreconfigitem('web', 'allow_archive',
    default=list,
)
coreconfigitem('web', 'allow_read',
    default=list,
)
coreconfigitem('web', 'baseurl',
    default=None,
)
coreconfigitem('web', 'cacerts',
    default=None,
)
coreconfigitem('web', 'certificate',
    default=None,
)
coreconfigitem('web', 'collapse',
    default=False,
)
coreconfigitem('web', 'csp',
    default=None,
)
coreconfigitem('web', 'deny_read',
    default=list,
)
coreconfigitem('web', 'descend',
    default=True,
)
coreconfigitem('web', 'description',
    default="",
)
coreconfigitem('web', 'encoding',
    default=lambda: encoding.encoding,
)
coreconfigitem('web', 'errorlog',
    default='-',
)
coreconfigitem('web', 'ipv6',
    default=False,
)
coreconfigitem('web', 'maxchanges',
    default=10,
)
coreconfigitem('web', 'maxfiles',
    default=10,
)
coreconfigitem('web', 'maxshortchanges',
    default=60,
)
coreconfigitem('web', 'motd',
    default='',
)
coreconfigitem('web', 'name',
    default=dynamicdefault,
)
coreconfigitem('web', 'port',
    default=8000,
)
coreconfigitem('web', 'prefix',
    default='',
)
coreconfigitem('web', 'push_ssl',
    default=True,
)
coreconfigitem('web', 'refreshinterval',
    default=20,
)
coreconfigitem('web', 'staticurl',
    default=None,
)
coreconfigitem('web', 'stripes',
    default=1,
)
coreconfigitem('web', 'style',
    default='paper',
)
coreconfigitem('web', 'templates',
    default=None,
)
coreconfigitem('web', 'view',
    default='served',
)
coreconfigitem('worker', 'backgroundclose',
    default=dynamicdefault,
)
# Windows defaults to a limit of 512 open files. A buffer of 128
# should give us enough headway.
coreconfigitem('worker', 'backgroundclosemaxqueue',
    default=384,
)
coreconfigitem('worker', 'backgroundcloseminfilecount',
    default=2048,
)
coreconfigitem('worker', 'backgroundclosethreadcount',
    default=4,
)
coreconfigitem('worker', 'enabled',
    default=True,
)
coreconfigitem('worker', 'numcpus',
    default=None,
)

# Rebase related configuration moved to core because other extensions are
# doing strange things. For example, shelve imports the extension to reuse
# some bits without formally loading it.
coreconfigitem('commands', 'rebase.requiredest',
    default=False,
)
coreconfigitem('experimental', 'rebaseskipobsolete',
    default=True,
)
coreconfigitem('rebase', 'singletransaction',
    default=False,
)
coreconfigitem('rebase', 'experimental.inmemory',
    default=False,
)
@@ -1,2229 +1,2234
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import errno
import hashlib

from .i18n import _
from .node import (
    bin,
    hex,
    nullid,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    logexchange,
    obsolete,
    phases,
    pushkey,
    pycompat,
    scmutil,
    sslutil,
    streamclone,
    url as urlmod,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {'v1': '01',
                         'v2': '02',
                         'packed1': 's1',
                         'bundle2': '02', #legacy
                        }

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {'gzip', 'bzip2', 'none'}

def parsebundlespec(repo, spec, strict=True, externalnames=False):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    If ``externalnames`` is False (the default), the human-centric names will
    be converted to their internal representation.

    Returns a 3-tuple of (compression, version, parameters). Compression will
    be ``None`` if not in strict mode and a compression isn't defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    def parseparams(s):
        if ';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(';', 1)

        for p in paramstr.split(';'):
            if '=' not in p:
                raise error.InvalidBundleSpecification(
                    _('invalid bundle specification: '
                      'missing "=" in parameter: %s') % p)

            key, value = p.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params


    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
                _('invalid bundle specification; '
                  'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                    _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _('missing support for repository features: %s') %
                ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
        compression = engine.bundletype()[1]
        version = _bundlespeccgversions[version]
    return compression, version, params

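# Illustration (editor's sketch, not part of this changeset): applying
# parsebundlespec() to a typical spec string. 'repo' is an assumed
# localrepository; the expected values in the comments follow the mapping
# tables above ('gzip' -> 'GZ' via the compression engine, 'v2' -> '02').
def _demoparsebundlespec(repo):
    comp, version, params = parsebundlespec(repo, 'gzip-v2;key=value')
    # comp == 'GZ', version == '02', params == {'key': 'value'}
    return comp, version, params
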
def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

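# Illustration (editor's sketch): readbundle() dispatches on the 4-byte
# magic read above -- 'HG10' (changegroup v1), 'HG20' and friends (bundle2),
# or 'HGS1' (stream clone bundle). The path below is hypothetical; 'ui' is
# an assumed ui object.
def _demoreadbundle(ui, path='some-bundle.hg'):
    fh = open(path, 'rb')          # the unbundler reads from fh lazily,
    return readbundle(ui, fh, path)  # so the handle must stay open
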
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                raise error.Abort(_('unknown compression algorithm: %s') % comp)
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

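# A few example return values of getbundlespec(), as a hedged illustration
# (the exact strings depend on the compression engines available):
#
#   bundle1, bzip2 compressed             -> 'bzip2-v1'
#   bundle2, no 'Compression' parameter   -> 'none-v2'
#   bundle2, gzip compressed              -> 'gzip-v2'
#   stream clone bundle                   -> 'none-packed1;requirements%3Drevlogv1'
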
def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    nodes and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

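# Minimal usage sketch (hypothetical values, not part of the module): with
# common=None the common set degenerates to [nullid], so every local rev is
# considered outgoing.
#
#   outgoing = _computeoutgoing(repo, heads=None, common=None)
#   # outgoing.missing now lists every node reachable from the repo heads
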
def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    # The goal of this config is to allow developers to choose the bundle
    # version used during exchange. This is especially handy during tests.
    # The value is a list of bundle versions to pick from; the highest
    # supported version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')

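# Illustrative configuration (assumed hgrc syntax for the developer knob
# read above):
#
#   [devel]
#   legacy.exchange = bundle1
#
# With this in place _forcebundle1() returns True even against a
# bundle2-capable peer, which is how tests exercise the old exchange path.
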
class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=(), pushvars=None):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # steps already performed
        # (used to check what steps have been already performed through
        # bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # summary of the remote phase situation
        self.remotephases = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map {pushkey partid -> callback handling failure}
        # used to handle exceptions from mandatory pushkey part failures
        self.pkfailcb = {}
        # an iterable of pushvars or None
        self.pushvars = pushvars

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = (  (missingheads and ::commonheads)
        #               + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

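    # Worked example for fallbackheads (hypothetical DAG, for illustration
    # only): the remote has A, and we push C from the local graph
    # A -> B -> C with revs=[C]. Then common == {A} and missing == {B, C}.
    # If the push fails, the common heads stay [A]: C is not in common, and
    # A is picked up by the 'parents(roots(missing))' revset above since A
    # is the parent of B, the root of the missing set.
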
    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }


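# Example lookup in the table above (illustrative only): pushing a brand new
# bookmark named 'feature' uses the 'export' pair, so on success the ui
# prints bookmsgmap['export'][0] % 'feature', i.e.
# "exporting bookmark feature\n", and on failure the second entry is used.
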
def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **pycompat.strkwargs(opargs))
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))

    if not pushop.remote.capable('unbundle'):
        raise error.Abort(_('cannot push: destination does not support the '
                            'unbundle wire protocol command'))

    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(pushop.repo,
                                              'push-response',
                                              pushop.remote.url())
    except IOError as err:
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager(), \
            lock or util.nullcontextmanager(), \
            pushop.trmanager or util.nullcontextmanager():
        pushop.repo.checkpush(pushop)
        _pushdiscovery(pushop)
        if not _forcebundle1(pushop):
            _pushbundle2(pushop)
        _pushchangeset(pushop)
        _pushsyncphase(pushop)
        _pushobsolete(pushop)
        _pushbookmark(pushop)

    return pushop

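# Minimal calling sketch (assumed peer setup; not part of this module):
#
#   from mercurial import hg
#   remote = hg.peer(repo, {}, 'ssh://example.com/repo')  # hypothetical URL
#   pushop = push(repo, remote, revs=[repo['.'].node()])
#   if pushop.cgresult:
#       repo.ui.status('changesets pushed\n')
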
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pushdiscoverymapping dictionary
    directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

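# Registration sketch for an extension-defined discovery step (hypothetical
# step name and body, shown for illustration):
#
#   @pushdiscovery('mystep')
#   def _pushdiscoverymystep(pushop):
#       # runs after the built-in steps, in registration order
#       pushop.ui.debug('running custom discovery\n')
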
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    if pushop.revs:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force,
                        ancestorsof=pushop.revs)
    else:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure cases of the changesets
    push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # then we may be in the issue 3781 case!
        # We skip the phase synchronisation that would otherwise, as a
        # courtesy, publish changesets that are possibly still draft on the
        # remote.
        pushop.outdatedphases = []
        pushop.fallbackoutdatedphases = []
        return

    pushop.remotephases = phases.remotephasessummary(pushop.repo,
                                                     pushop.fallbackheads,
                                                     remotephases)
    droots = pushop.remotephases.draftroots

    extracond = ''
    if not pushop.remotephases.publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs that are draft on the remote but public here.
    # XXX Beware that this revset breaks if droots is not strictly made of
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # add the changesets we are going to push as draft
        #
        # this should not be necessary for a publishing server, but because
        # of an issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, which can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore is empty --> no obsolete markers,
        # so we can skip the whole iteration
        if unfi.obsstore:
            # these messages are defined here for 80-character-limit reasons
            mso = _("push includes obsolete changeset: %s!")
            mspd = _("push includes phase-divergent changeset: %s!")
            mscd = _("push includes content-divergent changeset: %s!")
            mst = {"orphan": _("push includes orphan changeset: %s!"),
                   "phase-divergent": mspd,
                   "content-divergent": mscd}
            # If there is at least one obsolete or unstable changeset in
            # missing, at least one of the missing heads will be obsolete or
            # unstable. So checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.isunstable():
                    # TODO print more than one instability in the abort
                    # message
                    raise error.Abort(mst[ctx.instabilities()[0]] % ctx)

        discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a
    step from an extension, edit the b2partsgenmapping dictionary
    directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

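# Registration sketch for an extension-defined parts generator (hypothetical
# names, for illustration). Returning a callable makes it a reply handler,
# mirroring the built-in generators below:
#
#   @b2partsgenerator('my-part')
#   def _pushb2mypart(pushop, bundler):
#       part = bundler.newpart('my-part', data='payload')
#       def handlereply(op):
#           pushop.ui.debug('my-part acknowledged\n')
#       return handlereply
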
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(pushop.outgoing.missing
                or pushop.outdatedphases
                or pushop.outobsmarkers
                or pushop.outbookmarks)

@b2partsgenerator('check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = 'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        old = bin(old)
        data.append((book, old))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart('check:bookmarks', data=checkdata)

@b2partsgenerator('check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = 'heads' in b2caps.get('phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = [[] for p in phases.allphases]
        checks[phases.public].extend(pushop.remotephases.publicheads)
        checks[phases.draft].extend(pushop.remotephases.draftroots)
        if any(checks):
            for nodes in checks:
                nodes.sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart('check:phases', data=checkdata)

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
                                      'push')
    cgpart = bundler.newpart('changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

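# Version negotiation sketch for the code above (hypothetical capability
# values): if the remote advertises b2caps = {'changegroup': ['01', '02']}
# and the local repo supports both, max(cgversions) picks '02'; an empty
# advertised list (as shipped by Mercurial 3.1/3.2) keeps the '01' default.
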
@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
    haspushkey = 'pushkey' in b2caps
    hasphaseheads = 'heads' in b2caps.get('phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)

def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add('phases')
    if pushop.outdatedphases:
        updates = [[] for p in phases.allphases]
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart('phase-heads', data=phasedata)

def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc('%d' % phases.draft))
        part.addparam('new', enc('%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist('devel', 'legacy.exchange')
    legacybooks = 'bookmarks' in legacy

    if not legacybooks and 'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif 'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)

def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return 'export'
    elif not new:
        return 'delete'
    return 'update'

def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add('bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        new = bin(new)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(data)
    bundler.newpart('bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply

def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply

@b2partsgenerator('pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if '=' not in raw:
                msg = ("unable to parse variable '%s', should follow "
                       "'KEY=VALUE' or 'KEY=' format")
                raise error.Abort(msg % raw)
            k, v = raw.split('=', 1)
            shellvars[k] = v

        part = bundler.newpart('pushvars')

        for key, value in shellvars.iteritems():
            part.addparam(key, value, mandatory=False)

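# Usage sketch (assumed CLI and hook wiring, shown for illustration):
#
#   $ hg push --pushvars "DEBUG=1" --pushvars "NOTIFY="
#
# each KEY=VALUE pair is shipped as an advisory 'pushvars' part parameter;
# server-side hooks conventionally see them as HG_USERVAR_* environment
# variables (e.g. HG_USERVAR_DEBUG=1).
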
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

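# A minimal sketch of how an extension could contribute an extra part to
# every push bundle, mirroring _getbundlesendvars above; the part name
# 'myext-data' and its payload are hypothetical:
#
#     @b2partsgenerator('myext-data')
#     def _pushmyextdata(pushop, bundler):
#         part = bundler.newpart('myext-data', data='payload')
#         def handlereply(op):
#             pass  # inspect op.records for replies to this part
#         return handlereply  # returned callables run on the server reply
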
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable('unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
                                         fastpath=True, bundlecaps=bundlecaps)
    else:
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
                                         'push', bundlecaps=bundlecaps)

    # apply changegroup to remote
    # the local repo finds heads on the server and figures out which revs it
    # must push. Once the revs are transferred, if the server finds it has
    # different heads (someone else won a commit/push race), it aborts.
    if pushop.force:
        remoteheads = ['force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                             pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the phase data the server sent as a courtesy and treat the
        # remote as a plain publishing server, so that changesets which may
        # still be draft locally get published.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public-only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery may have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

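# For illustration, the pushkey-based exchange used above encodes bookmark
# creation and deletion with empty hex strings, which is what the 'export'
# and 'delete' actions rely on:
#
#     remote.pushkey('bookmarks', b, '', newhex)      # export: no old value
#     remote.pushkey('bookmarks', b, oldhex, '')      # delete: no new value
#     remote.pushkey('bookmarks', b, oldhex, newhex)  # regular update
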
class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

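# Because transactionmanager inherits from util.transactional, it works as a
# context manager, which is how pull() drives it below. A minimal sketch:
#
#     with transactionmanager(repo, 'pull', remote.url()) as tm:
#         tr = tm.transaction()  # opened lazily on first use
#         ...                    # close() on success, release() otherwise
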
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested,
                           **pycompat.strkwargs(opargs))

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
    with repo.wlock(), repo.lock(), pullop.trmanager:
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        streamclone.maybeperformlegacystreamclone(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)

    # storing remotenames
    if repo.ui.configbool('experimental', 'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pullop

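# A minimal usage sketch, assuming `repo` is a local repository and `other`
# is a peer obtained from hg.peer():
#
#     pullop = pull(repo, other)          # pull everything
#     if pullop.cgresult == 0:
#         repo.ui.status('no changes found\n')
#
# Partial pulls pass `heads`; empty clones may pass streamclonerequested.
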
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for a function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

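# A sketch of registering an additional discovery step from an extension;
# the step name 'myext' is hypothetical:
#
#     @pulldiscovery('myext')
#     def _myextdiscovery(pullop):
#         pullop.repo.ui.debug('running myext discovery\n')
#
# Steps run in registration order via _pulldiscovery() below.
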
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    books = pullop.remote.listkeys('bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will handle all discovery
    at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it would end up doing a pathological number of
        # round trips for a huge number of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we will not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups and streaming clones."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # make ui easier to access
    ui = pullop.repo.ui

    # check whether a bundle2-based stream clone should be performed
    streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]

    # declare pull perimeters
    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads

    if streaming:
        kwargs['cg'] = False
        kwargs['stream'] = True
        pullop.stepsdone.add('changegroup')

    else:
        # pulling changegroup
        pullop.stepsdone.add('changegroup')

        kwargs['cg'] = pullop.fetch

    legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
    hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
    if (not legacyphase and hasbinaryphase):
        kwargs['phases'] = True
        pullop.stepsdone.add('phases')

    if 'listkeys' in pullop.remotebundle2caps:
        if 'phases' not in pullop.stepsdone:
            kwargs['listkeys'] = ['phases']

    bookmarksrequested = False
    legacybookmark = 'bookmarks' in ui.configlist('devel', 'legacy.exchange')
    hasbinarybook = 'bookmarks' in pullop.remotebundle2caps

    if pullop.remotebookmarks is not None:
        pullop.stepsdone.add('request-bookmarks')

    if ('request-bookmarks' not in pullop.stepsdone
        and pullop.remotebookmarks is None
        and not legacybookmark and hasbinarybook):
        kwargs['bookmarks'] = True
        bookmarksrequested = True

    if 'listkeys' in pullop.remotebundle2caps:
        if 'request-bookmarks' not in pullop.stepsdone:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            pullop.stepsdone.add('request-bookmarks')
            kwargs.setdefault('listkeys', []).append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.bundleoperation(pullop.repo, pullop.gettransaction)
        op.modes['bookmarks'] = 'records'
        bundle2.processbundle(pullop.repo, bundle, op=op)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    if bookmarksrequested:
        books = {}
        for record in op.records['bookmarks']:
            books[record['bookmark']] = record["node"]
        pullop.remotebookmarks = books
    else:
        for namespace, value in op.records['listkeys']:
            if namespace == 'bookmarks':
                pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

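# For a bundle2 stream clone, the logic above boils down to a getbundle call
# roughly like this sketch (argument values are illustrative):
#
#     bundle = remote.getbundle('pull', bundlecaps=caps20to10(repo),
#                               common=[nullid], heads=rheads,
#                               cg=False, stream=True)
#
# i.e. no changegroup is requested and the server answers with a 'stream'
# part generated by _getbundlestream() below.
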
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break a future useful rollback
    # call.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
                                   pullop.remote.url())
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing: all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

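# For illustration, the 'phases' listkeys namespace consumed above maps the
# hex node of each remote draft root to its phase number, with an extra
# 'publishing' entry advertised by publishing servers:
#
#     {'publishing': 'True'}   # publishing server (usually no draft roots)
#     {'<hexnode>': '1'}       # non-publishing server with one draft root
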
def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

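# For illustration, the 'obsolete' listkeys namespace decoded above carries
# base85-encoded marker data split over keys 'dump0', 'dump1', ... so each
# value fits pushkey size limits; 'dump0' is present whenever there is data,
# which is why the code above uses it as its probe.
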
def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

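# The resulting set looks roughly like this sketch (the quoted blob varies
# with the repository's bundle2 capabilities):
#
#     {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02...'}
#
# and is passed as bundlecaps so that old-style getbundle requests can still
# advertise bundle2 support.
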
# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for a function generating a bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

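# A sketch of a server-side part generator registered by an extension; the
# step name 'myext' is hypothetical. `idx` can force a position in the
# order, as b2partsgenerator('pushvars', idx=0) does on the push side:
#
#     @getbundle2partsgenerator('myext')
#     def _getbundlemyextpart(bundler, repo, source, bundlecaps=None,
#                             b2caps=None, **kwargs):
#         if kwargs.get(r'myext', False):
#             bundler.newpart('myext', data='payload')
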
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        return changegroup.makestream(repo, outgoing, '01', source,
                                      bundlecaps=bundlecaps)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

@getbundle2partsgenerator('stream')
def _getbundlestream(bundler, repo, source, bundlecaps=None,
                     b2caps=None, heads=None, common=None, **kwargs):
    if not kwargs.get('stream', False):
        return
    filecount, bytecount, it = streamclone.generatev2(repo)
    requirements = ' '.join(repo.requirements)
    part = bundler.newpart('stream', data=it)
    part.addparam('bytecount', '%d' % bytecount, mandatory=True)
    part.addparam('filecount', '%d' % filecount, mandatory=True)
    part.addparam('requirements', requirements, mandatory=True)
    part.addparam('version', 'v2', mandatory=True)

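# On the wire, the part emitted above carries only mandatory parameters, so
# a client that cannot honor any of them aborts instead of misreading the
# payload. A sketch with illustrative values:
#
#     part: stream
#       bytecount: 446931   (total size of the streamed store files)
#       filecount: 12       (number of files in the store)
#       requirements: dotencode fncache generaldelta revlogv1 store
#       version: v2
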
@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cgstream = None
    if kwargs.get(r'cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        if outgoing.missing:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps)

    if cgstream:
        part = bundler.newpart('changegroup', data=cgstream)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', '%d' % len(outgoing.missing),
                      mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

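# The version negotiation above reduces to "intersect, then take max()".
# A self-contained sketch of just that logic, with hypothetical capability
# sets standing in for b2caps and changegroup.supportedoutgoingversions():
def _examplenegotiatecgversion():
    clientversions = ['01', '02']        # what the client advertised
    serverversions = {'01', '02', '03'}  # what the server can emit
    if not clientversions:  # 3.1 and 3.2 ship with an empty value
        return '01'
    common = [v for v in clientversions if v in serverversions]
    if not common:
        raise ValueError('no common changegroup version')
    return max(common)  # '02': the newest version both sides understand
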
@getbundle2partsgenerator('bookmarks')
def _getbundlebookmarkpart(bundler, repo, source, bundlecaps=None,
                           b2caps=None, **kwargs):
    """add a bookmark part to the requested bundle"""
    if not kwargs.get(r'bookmarks', False):
        return
    if 'bookmarks' not in b2caps:
        raise ValueError(_('no common bookmarks exchange method'))
    books = bookmod.listbinbookmarks(repo)
    data = bookmod.binaryencode(books)
    if data:
        bundler.newpart('bookmarks', data=data)

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get(r'listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get(r'obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('phases')
def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, **kwargs):
    """add phase heads part to the requested bundle"""
    if kwargs.get(r'phases', False):
        if 'heads' not in b2caps.get('phases'):
            raise ValueError(_('no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phase for now.
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = 'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = []
        for phase in phases.allphases:
            phasemapping.append(sorted(headsbyphase[phase]))

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart('phase-heads', data=phasedata)

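# A standalone illustration of the bucketing step above: heads are grouped
# by phase, then flattened into one sorted list per phase for the binary
# encoder. The node values and phase lookup are hypothetical stand-ins for
# repo.heads() and repo._phasecache.phase().
def _examplephasemapping():
    public, draft = 0, 1  # phase constants as in mercurial.phases
    headphases = {'aaa': public, 'bbb': draft, 'ccc': draft}
    headsbyphase = collections.defaultdict(set)
    for head, headphase in headphases.items():
        headsbyphase[headphase].add(head)
    phasemapping = [sorted(headsbyphase[p]) for p in (public, draft)]
    assert phasemapping == [['aaa'], ['bbb', 'ccc']]
    return phasemapping
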
@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get(r'cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

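# The hashed-heads comparison above can be reproduced in isolation: the
# client hashes the sorted binary heads when it starts, and the server
# recomputes the digest at apply time. The node ids here are hypothetical
# 20-byte stand-ins for real changeset hashes.
def _exampleheadshashrace():
    def headshash(heads):
        return hashlib.sha1(''.join(sorted(heads))).digest()
    before = ['\x11' * 20, '\x22' * 20]
    after = before + ['\x33' * 20]  # someone pushed mid-transfer
    their_heads = ['hashed', headshash(before)]
    assert their_heads == ['hashed', headshash(before)]  # no race
    assert their_heads != ['hashed', headshash(after)]   # race detected
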
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                op = bundle2.applybundle(repo, cg, tr, source, url)
                r = bundle2.combinechangegroupresults(op)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

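# The [wlock, lock, tr] list above is a deliberate pattern: a nested
# function cannot rebind an outer local in Python 2, so mutable list slots
# stand in for 'nonlocal'. A minimal sketch of the lazy-creation behaviour,
# using object() as a hypothetical stand-in for repo.transaction():
def _examplelazytransaction():
    lockandtr = [None]  # one slot; the real code also tracks wlock and lock
    def gettransaction():
        if lockandtr[0] is None:
            lockandtr[0] = object()  # created only on first demand
        return lockandtr[0]
    # Read-only parts never call this, so no transaction is ever opened
    # for them; parts that write all share one lazily created transaction.
    assert gettransaction() is gettransaction()
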
def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles'):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(
        repo, entries, streamclonerequested=pullop.streamclonerequested)

    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback'):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

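# For reference, a hypothetical clone bundles manifest line and what the
# loop above extracts from it: the URL first, then urlquoted key=value
# attributes. (BUNDLESPEC is additionally split into COMPRESSION/VERSION
# by parsebundlespec(); this sketch leaves that step out.)
def _exampleparsemanifestline():
    line = 'https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true'
    fields = line.split()
    attrs = {'URL': fields[0]}
    for rawattr in fields[1:]:
        key, value = rawattr.split('=', 1)
        attrs[urlreq.unquote(key)] = urlreq.unquote(value)
    assert attrs == {'URL': 'https://example.com/full.hg',
                     'BUNDLESPEC': 'gzip-v2',
                     'REQUIRESNI': 'true'}
    return attrs
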
def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                comp, version, params = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and (comp != 'UN' or version != 's1'):
                    repo.ui.debug('filtering %s because not a stream clone\n' %
                                  entry['URL'])
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug('filtering %s because cannot determine if a stream '
                          'clone bundle\n' % entry['URL'])
            continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers')
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

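# A usage sketch for the preference sort above, with hypothetical manifest
# entries and a ui.clonebundleprefers-style list. An exact attribute match
# sorts first, so the zstd bundle wins here:
def _examplesortbyprefers():
    entries = [
        {'URL': 'https://example.com/gzip.hg', 'COMPRESSION': 'gzip'},
        {'URL': 'https://example.com/zstd.hg', 'COMPRESSION': 'zstd'},
    ]
    prefers = [p.split('=', 1) for p in ['COMPRESSION=zstd']]
    items = sorted(clonebundleentry(v, prefers) for v in entries)
    assert items[0].value['URL'].endswith('zstd.hg')
    return [i.value for i in items]
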
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    with repo.lock(), repo.transaction('bundleurl') as tr:
        try:
            fh = urlmod.open(ui, url)
            cg = readbundle(ui, fh, 'stream')

            if isinstance(cg, streamclone.streamcloneapplier):
                cg.apply(repo)
            else:
                bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
            return True
        except urlerr.httperror as e:
            ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
        except urlerr.urlerror as e:
            ui.warn(_('error fetching bundle: %s\n') % e.reason)

        return False

@@ -1,198 +1,264 @@
#require serve

#testcases stream-legacy stream-bundle2

#if stream-bundle2
  $ cat << EOF >> $HGRCPATH
  > [experimental]
  > bundle2.stream = yes
  > EOF
#endif

Initialize repository
the status call is to check for issue5130

  $ hg init server
  $ cd server
  $ touch foo
  $ hg -q commit -A -m initial
  >>> for i in range(1024):
  ...     with open(str(i), 'wb') as fh:
  ...         fh.write(str(i))
  $ hg -q commit -A -m 'add a lot of files'
  $ hg st
  $ hg serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid >> $DAEMON_PIDS
  $ cd ..

Basic clone

#if stream-legacy
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (* */sec) (glob)
#endif

--uncompressed is an alias to --stream

#if stream-legacy
  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (* */sec) (glob)
#endif

Clone with background file closing enabled

#if stream-legacy
  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  sending branchmap command
  streaming all changes
  sending stream_out command
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  transferred 96.3 KB in * seconds (*/sec) (glob)
  query 1; heads
  sending batch command
  searching for changes
  all remote heads known locally
  no changes found
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: "phase-heads" supported
  bundle2-input-part: total payload size 24
  bundle2-input-bundle: 1 parts total
  checking for updated bookmarks
#endif
#if stream-bundle2
  $ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
  using http://localhost:$HGPORT/
  sending capabilities command
  query 1; heads
  sending batch command
  streaming all changes
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "stream" (params: 4 mandatory) supported
  applying stream bundle
  1027 files to transfer, 96.3 KB of data
  starting 4 threads for background file closing
  transferred 96.3 KB in * seconds (* */sec) (glob)
  bundle2-input-part: total payload size 110887
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: "phase-heads" supported
  bundle2-input-part: total payload size 24
  bundle2-input-bundle: 2 parts total
  checking for updated bookmarks
#endif

Cannot stream clone when there are secret changesets

  $ hg -R server phase --force --secret -r tip
  $ hg clone --stream -U http://localhost:$HGPORT secret-denied
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

  $ killdaemons.py

Streaming of secrets can be overridden by server config

  $ cd server
  $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

#if stream-legacy
  $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*/sec) (glob)
  searching for changes
  no changes found
#endif
#if stream-bundle2
  $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (* */sec) (glob)
#endif

  $ killdaemons.py

Verify interaction between preferuncompressed and secret presence

  $ cd server
  $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

  $ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

  $ killdaemons.py

Clone not allowed when full bundles disabled and can't serve secrets

  $ cd server
  $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

  $ hg clone --stream http://localhost:$HGPORT secret-full-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  remote: abort: server has pull-based clones disabled
  abort: pull failed on remote
  (remove --pull if specified or upgrade Mercurial)
  [255]

Local stream clone with secrets involved
(This is just a test over behavior: if you have access to the repo's files,
there is no security so it isn't important to prevent a clone here.)

  $ hg clone -U --stream server local-secret
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 96ee1d7354c4

Stream clone while repo is changing:

  $ mkdir changing
  $ cd changing

extension for delaying the server process so we reliably can modify the repo
while cloning

  $ cat > delayer.py <<EOF
  > import time
  > from mercurial import extensions, vfs
  > def __call__(orig, self, path, *args, **kwargs):
  >     if path == 'data/f1.i':
  >         time.sleep(2)
  >     return orig(self, path, *args, **kwargs)
  > extensions.wrapfunction(vfs.vfs, '__call__', __call__)
  > EOF

prepare repo with small and big file to cover both code paths in emitrevlogdata

  $ hg init repo
  $ touch repo/f1
  $ $TESTDIR/seq.py 50000 > repo/f2
  $ hg -R repo ci -Aqm "0"
  $ hg serve -R repo -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
  $ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stating file with write lock and
actually serving file content

  $ hg clone -q --stream -U http://localhost:$HGPORT1 clone &
  $ sleep 1
  $ echo >> repo/f1
  $ echo >> repo/f2
  $ hg -R repo ci -m "1"
  $ wait
  $ hg -R clone id
  000000000000
  $ cd ..

Stream repository with bookmarks
--------------------------------

(revert introduction of secret changeset)

  $ hg -R server phase --draft 'secret()'

add a bookmark

  $ hg -R server bookmark -r tip some-bookmark

clone it

#if stream-legacy
  $ hg clone --stream http://localhost:$HGPORT with-bookmarks
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (*) (glob)
  searching for changes
  no changes found
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2
  $ hg clone --stream http://localhost:$HGPORT with-bookmarks
  streaming all changes
  1027 files to transfer, 96.3 KB of data
  transferred 96.3 KB in * seconds (* */sec) (glob)
  updating to branch default
  1025 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
  $ hg -R with-bookmarks bookmarks
     some-bookmark             1:c17445101a72