changegroup: replace getchangegroup with makechangegroup...
Durham Goode
r34103:5ede882c default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

the Binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows:

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any application-level options MUST go into a bundle2 part instead.

Payload part
------------------------

Binary format is as follows:

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler, that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    to interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to
             refer to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the Part
type contains any uppercase char it is considered mandatory. When no handler
is known for a Mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read from an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

from __future__ import absolute_import, division

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

_fphasesentry = '>i20s'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)

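As a concrete illustration of the `<param-sizes>` field described in the module docstring, the dynamically built format can round-trip size couples through `struct`. This is a standalone sketch; the helper is repeated locally so the snippet runs on its own.

```python
import struct

def _makefpartparamsizes(nbparams):
    # same dynamic format builder as above, repeated for self-containment
    return '>' + ('BB' * nbparams)

# two parameters: ('key1', 'val1') and ('k2', '') -> size couples
sizes = [(4, 4), (2, 0)]
fmt = _makefpartparamsizes(len(sizes))       # one 'BB' couple per parameter
packed = struct.pack(fmt, *[n for couple in sizes for n in couple])
unpacked = struct.unpack(fmt, packed)
```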
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

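A usage sketch of the decorator above. The registry and validation are re-created locally so the snippet runs without Mercurial; the handler body is a stand-in, and only the lowercase-matching behavior mirrors the code above.

```python
import re

# local copies of the registry and validation, for self-containment
_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
parthandlermapping = {}

def parthandler(parttype, params=()):
    """register a function as a handler for `parttype` (sketch)"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)
    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

@parthandler('output', ('in-reply-to',))
def handleoutput(op, part):
    """process an 'output' part (no-op in this sketch)"""

# matching is case insensitive: 'OUTPUT' (the mandatory spelling)
# resolves to the same handler after lowercasing
handler = parthandlermapping.get('OUTPUT'.lower())
```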
class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

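A brief usage sketch of the record keeper above. The class is repeated in miniature here so the snippet runs on its own; the category names and entries are illustrative.

```python
# minimal copy of unbundlerecords (subset of the methods above)
class unbundlerecords(object):
    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

records = unbundlerecords()
records.add('changegroup', {'return': 1})
records.add('output', 'remote: ok', inreplyto=7)

# category lookup, chronological iteration, and per-part replies
bycat = records['changegroup']
inorder = list(records)
replies = records.getreplies(7)
```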
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction started')
        self.hookargs.update(hookargs)

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown Mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly
        # needed to handle different return codes to unbundle according to
        # the type of bundle. We should probably clean up or drop this
        # return code craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

433 def _processpart(op, part):
433 def _processpart(op, part):
434 """process a single part from a bundle
434 """process a single part from a bundle
435
435
436 The part is guaranteed to have been fully consumed when the function exits
436 The part is guaranteed to have been fully consumed when the function exits
437 (even if an exception is raised)."""
437 (even if an exception is raised)."""
438 status = 'unknown' # used by debug output
438 status = 'unknown' # used by debug output
439 hardabort = False
439 hardabort = False
440 try:
440 try:
441 try:
441 try:
442 handler = parthandlermapping.get(part.type)
442 handler = parthandlermapping.get(part.type)
443 if handler is None:
443 if handler is None:
444 status = 'unsupported-type'
444 status = 'unsupported-type'
445 raise error.BundleUnknownFeatureError(parttype=part.type)
445 raise error.BundleUnknownFeatureError(parttype=part.type)
446 indebug(op.ui, 'found a handler for part %r' % part.type)
446 indebug(op.ui, 'found a handler for part %r' % part.type)
447 unknownparams = part.mandatorykeys - handler.params
447 unknownparams = part.mandatorykeys - handler.params
448 if unknownparams:
448 if unknownparams:
449 unknownparams = list(unknownparams)
449 unknownparams = list(unknownparams)
450 unknownparams.sort()
450 unknownparams.sort()
451 status = 'unsupported-params (%s)' % unknownparams
451 status = 'unsupported-params (%s)' % unknownparams
452 raise error.BundleUnknownFeatureError(parttype=part.type,
452 raise error.BundleUnknownFeatureError(parttype=part.type,
453 params=unknownparams)
453 params=unknownparams)
454 status = 'supported'
454 status = 'supported'
455 except error.BundleUnknownFeatureError as exc:
455 except error.BundleUnknownFeatureError as exc:
456 if part.mandatory: # mandatory parts
456 if part.mandatory: # mandatory parts
457 raise
457 raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam(
                    'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
    # If exiting or interrupted, do not attempt to seek the stream in the
    # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content so we do not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
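The two helpers above are inverses of each other: capability names and values are percent-quoted, joined with `=` and commas, and separated by newlines. A minimal sketch of the round trip, written in Python 3 `str` terms (the module itself works through `urlreq`), with the capability names chosen only for the demonstration:

```python
from urllib.parse import quote, unquote

def decodecaps(blob):
    # One capability per line; values are comma-separated after '='.
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, []
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps

def encodecaps(caps):
    # Sort keys so the encoding is deterministic.
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

caps = {'HG20': [], 'changegroup': ['01', '02']}
assert encodecaps(caps) == 'HG20\nchangegroup=01,02'
assert decodecaps(encodecaps(caps)) == caps
```

Quoting both keys and values is what lets capability strings safely contain `=`, `,` or newline characters.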

bundletypes = {
    "": ("", 'UN'), # only when using unbundle on ssh and old http servers
                    # since the unification ssh accepts a header but there
                    # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
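The parameter chunk is a space-separated list of percent-quoted `name=value` pairs (value omitted when `None`), and the unbundler side reverses exactly that. A self-contained sketch of both directions, with the parameter names picked only for illustration:

```python
from urllib.parse import quote, unquote

def encodeparams(params):
    # params is a list of (name, value-or-None) pairs, as in bundle20._params.
    blocks = []
    for par, value in params:
        par = quote(par)
        if value is not None:
            par = '%s=%s' % (par, quote(value))
        blocks.append(par)
    return ' '.join(blocks)

def decodeparams(block):
    # Mirrors unbundle20._processallparams, minus the handler dispatch.
    decoded = {}
    for p in block.split(' '):
        parts = [unquote(i) for i in p.split('=', 1)]
        if len(parts) < 2:
            parts.append(None)  # value-less parameter
        decoded[parts[0]] = parts[1]
    return decoded

encoded = encodeparams([('Compression', 'BZ'), ('someflag', None)])
assert encoded == 'Compression=BZ someflag'
assert decodeparams(encoded) == {'Compression': 'BZ', 'someflag': None}
```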

    def _getcorechunk(self):
        """yield chunk for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we preserve
        server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
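`getunbundler` sniffs the first four bytes of the stream: a two-byte magic (`HG`) followed by a two-byte version that selects the unbundler class. A minimal standalone sketch of that header split (the `readexactly` helper here is a stand-in for `changegroup.readexactly`):

```python
import io

def readexactly(fp, n):
    # Minimal stand-in for changegroup.readexactly: read n bytes or fail.
    data = fp.read(n)
    if len(data) != n:
        raise ValueError('stream ended unexpectedly')
    return data

def sniffversion(fp):
    # The first four bytes split into a two-byte magic and a two-byte version.
    magicstring = readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    return version

assert sniffversion(io.BytesIO(b'HG20rest-of-stream')) == b'20'
```

The real function then looks the version up in `formatmap` (`{'20': unbundle20}`) and aborts on an unknown version.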

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process a block of stream level parameters"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a
        ``BundleUnknownFeatureError`` when they are unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)
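The casing rule above is the whole negotiation mechanism for stream parameters: the first letter of the name decides whether an unknown parameter is ignorable or fatal. A small sketch of that dispatch, with a made-up handler table standing in for `b2streamparamsmap`:

```python
# Hypothetical handler table; the real map is b2streamparamsmap.
handlers = {
    'compression': lambda value: ('decompress-with', value),
}

def processparam(name, value):
    # The first letter decides severity: lower case parameters are
    # advisory and silently skipped when unknown; upper case parameters
    # are mandatory, and an unknown one aborts the unbundling.
    handler = handlers.get(name.lower())
    if handler is None:
        if name[0].islower():
            return None  # advisory: ignored
        raise KeyError('unsupported mandatory parameter: %s' % name)
    return handler(value)

assert processparam('Compression', 'BZ') == ('decompress-with', 'BZ')
assert processparam('fancyfeature', None) is None
```

Note that handler lookup is case-insensitive (`name.lower()`); only the severity of an *unknown* parameter depends on the casing the sender used.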

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends: the bundle has to be
        interpreted to find its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)
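The `emptycount` loop works because part headers and payload chunks share the same size-prefix format, so the forwarder can re-emit size-prefixed chunks blindly until it sees two consecutive empty markers: the end of the last part's payload, then the end-of-stream header. A standalone sketch of that framing loop over an in-memory stream (interrupt-flag handling is omitted, and `'>i'` is assumed to match `_fpartheadersize`/`_fpayloadsize`):

```python
import io
import struct

FMT = '>i'  # assumed shared signed 32-bit size prefix

def forwardchunks(fp):
    # Re-emit size-prefixed chunks verbatim until two consecutive
    # empty markers terminate the stream.
    emptycount = 0
    while emptycount < 2:
        size = struct.unpack(FMT, fp.read(4))[0]
        yield struct.pack(FMT, size)
        if size:
            emptycount = 0
        else:
            emptycount += 1
            continue
        if size < 0:
            raise ValueError('negative chunk size: %i' % size)
        yield fp.read(size)

stream = (struct.pack(FMT, 3) + b'abc'  # one 3-byte chunk
          + struct.pack(FMT, 0)         # end of payload
          + struct.pack(FMT, 0))        # end of stream
assert b''.join(forwardchunks(io.BytesIO(stream))) == stream
```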

    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            # Seek to the end of the part to force its consumption so the next
            # part can be read. But then seek back to the beginning so the
            # code consuming this generator has a part that starts at 0.
            part.seek(0, 2)
            part.seek(0)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))
972
972
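The mandatory/advisory bookkeeping above can be illustrated in isolation. The class below is a hypothetical standalone sketch (not the mercurial API): it reproduces only the duplicate check, the mandatory/advisory split, and the immutable-tuple views.

```python
class PartParams(object):
    """Hypothetical sketch of the addparam bookkeeping above."""

    def __init__(self):
        self._mandatoryparams = []
        self._advisoryparams = []
        self._seenparams = set()

    def addparam(self, name, value='', mandatory=True):
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        # mandatory params must be understood by the receiver;
        # advisory ones may be silently ignored.
        params = self._mandatoryparams if mandatory else self._advisoryparams
        params.append((name, value))

    @property
    def mandatoryparams(self):
        # immutable view, as in the real class
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        return tuple(self._advisoryparams)
```

Adding the same name twice raises, regardless of whether the first occurrence was mandatory or advisory.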
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = util.forcebytestr(exc)
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % bexc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % bexc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

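The header layout getchunks emits can be sketched with `struct` directly. This is an illustration of the byte layout only, not the canonical encoder: the format strings ('>B' for the type size, '>I' for the part id, '>BB' for the param counts and sizes, '>i' for the length prefix) are assumed values for `_fparttypesize`, `_fpartid`, `_fpartparamcount` and `_fpartheadersize`.

```python
import struct

def packpartheader(parttype, partid, manpar, advpar):
    """Sketch: serialize a part header in the bundle2 layout above."""
    header = [struct.pack('>B', len(parttype)),   # parttype size
              parttype,
              struct.pack('>I', partid)]          # part id
    # mandatory and advisory parameter counts
    header.append(struct.pack('>BB', len(manpar), len(advpar)))
    # one (keysize, valuesize) byte pair per parameter
    sizes = []
    for key, value in manpar + advpar:
        sizes.append(len(key))
        sizes.append(len(value))
    header.append(struct.pack('>' + 'BB' * (len(sizes) // 2), *sizes))
    # raw key/value bytes, mandatory first
    for key, value in manpar + advpar:
        header.append(key)
        header.append(value)
    headerchunk = b''.join(header)
    # the header chunk itself is length-prefixed on the wire
    return struct.pack('>i', len(headerchunk)) + headerchunk
```

Note how the uppercase/lowercase part type (mandatory bit) is decided by the caller, as in getchunks above.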
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if (util.safehasattr(self.data, 'next')
                or util.safehasattr(self.data, '__next__')):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = []  # (payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

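The payload framing read above (length-prefixed chunks, 0 as end-of-payload marker, -1 as the interrupt flag) can be sketched as a small standalone parser. This is a hypothetical helper, assuming the '>i' payload-size format; real interrupt handling (reading an out-of-band part) is elided.

```python
import io
import struct

FLAGINTERRUPT = -1

def readpayload(fp):
    """Sketch: collect payload chunks until the 0-size terminator."""
    chunks = []
    while True:
        size = struct.unpack('>i', fp.read(4))[0]
        if size == 0:
            # end-of-payload marker
            return b''.join(chunks)
        if size == FLAGINTERRUPT:
            # the real reader hands control to interrupthandler here
            raise NotImplementedError('interrupt handling elided in sketch')
        if size < 0:
            raise ValueError('negative payload chunk size: %i' % size)
        chunks.append(fp.read(size))

# a two-chunk payload followed by the terminator
wire = (struct.pack('>i', 3) + b'foo' +
        struct.pack('>i', 3) + b'bar' +
        struct.pack('>i', 0))
```

A chunk index like `_chunkindex` would record, before each chunk, the pair (payload bytes seen so far, file offset) to make later seeks cheap.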
    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

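The chunk-index lookup is easier to see with concrete numbers. Below is a standalone copy of the logic with a sample index; the file offsets (second tuple element) are made up for illustration.

```python
def findchunk(chunkindex, pos):
    """Map a payload offset to (chunk number, offset within chunk)."""
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            # pos falls inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')

# (payload position, file position) for each chunk start
idx = [(0, 100), (10, 118), (20, 136)]
```

A payload offset of 15 falls 5 bytes into chunk 1; offsets past the last recorded chunk start raise, which is why seek() forces a full read() first.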
    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
            return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

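For example, a peer advertising obsmarkers formats would be decoded as follows (a standalone copy of the function above, shown with a made-up capability tuple):

```python
def obsmarkersversion(caps):
    # keep only 'V<n>' tokens and strip the leading 'V'
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

caps = {'obsmarkers': ('V0', 'V1', 'experimental')}
versions = obsmarkersversion(caps)  # non-'V' tokens are ignored
```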
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, '01', source)
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now, so we keep them separated for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)
    if opts.get('phases') and repo.revs('%ln and secret()',
                                        outgoing.missingheads):
        part.addparam('targetphase', '%d' % phases.secret, mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get('phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = []
        for phase in phases.allphases:
            for head in headsbyphase[phase]:
                phasedata.append(_pack(_fphasesentry, phase, head))
        bundler.newpart('phase-heads', data=''.join(phasedata))

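Each phase-heads entry packed above is a fixed-size binary record. A minimal sketch of the encode/decode round trip, assuming `_fphasesentry` is the big-endian format `'>i20s'` (a 32-bit phase number followed by a 20-byte binary node hash); the helper names here are illustrative, not Mercurial's API:

```python
import struct

# Assumed wire format for one phase-heads entry: big-endian 32-bit
# phase number followed by a 20-byte binary node hash.
_fphasesentry = '>i20s'

def encodephaseheads(headsbyphase):
    """Pack {phase: [node, ...]} into the concatenated payload."""
    phasedata = []
    for phase, heads in sorted(headsbyphase.items()):
        for head in heads:
            phasedata.append(struct.pack(_fphasesentry, phase, head))
    return b''.join(phasedata)

def decodephaseheads(data):
    """Unpack the payload back into a list of (phase, node) tuples."""
    entrysize = struct.calcsize(_fphasesentry)
    entries = []
    for offset in range(0, len(data), entrysize):
        entry = data[offset:offset + entrysize]
        if len(entry) < entrysize:
            # a trailing fragment means the part was truncated
            raise ValueError('bad phase-heads payload')
        entries.append(struct.unpack(_fphasesentry, entry))
    return entries
```

The same framing is what `_readphaseheads` below relies on: a short read marks the end of the part, and any leftover fragment is an error.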
def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

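The 'hgtagsfnodes' payload built above is just concatenated 20-byte pairs: a head node immediately followed by the filenode of its `.hgtags`. A hedged sketch of decoding such a payload back into a mapping, assuming 20-byte binary node hashes (the helper name is illustrative, not part of Mercurial's API):

```python
def decodetagsfnodes(data):
    """Split a concatenated (node, fnode) payload into a dict.

    Each record is 40 bytes: a 20-byte head node immediately followed
    by the 20-byte .hgtags filenode for that head.
    """
    if len(data) % 40:
        raise ValueError('truncated hgtagsfnodes payload')
    mapping = {}
    for offset in range(0, len(data), 40):
        node = data[offset:offset + 20]
        fnode = data[offset + 20:offset + 40]
        mapping[node] = fnode
    return mapping
```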
def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker
    format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

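The `obsolete.commonversion` call above is a small negotiation step: pick a marker format both sides understand before encoding. A sketch of the pattern (an illustration under assumed version numbers, not Mercurial's actual implementation), choosing the highest format the local side can emit that the remote also advertises:

```python
# Assumed: format versions the local side knows how to encode.
LOCAL_VERSIONS = (0, 1)

def commonversion(remoteversions):
    """Return the highest marker version shared with the remote, or None."""
    shared = set(LOCAL_VERSIONS) & set(remoteversions)
    if not shared:
        return None
    return max(shared)
```

Returning `None` rather than raising lets the caller decide whether a missing common format is fatal, as `buildobsmarkerspart` does above.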
def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

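The legacy branch above streams a magic header followed by the compressed changegroup chunks. A minimal sketch of the same pattern using `zlib` as a stand-in for the compression engine (the helper names are illustrative; `HG10GZ` is the historical header for gzip-compressed v1 bundles):

```python
import zlib

def compressedchunks(header, chunks):
    """Yield a magic header, then the chunks compressed as one stream."""
    yield header
    compressor = zlib.compressobj()
    for chunk in chunks:
        data = compressor.compress(chunk)
        if data:
            yield data
    # emit whatever the compressor buffered internally
    yield compressor.flush()

def readbundlepayload(stream, headersize):
    """Strip the header and decompress the remaining payload."""
    header = stream[:headersize]
    payload = zlib.decompress(stream[headersize:])
    return header, payload
```

Streaming the header uncompressed is what lets a reader sniff the bundle type before committing to a decompressor.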
def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get('return', 0)
               for r in op.records['changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

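The return-code arithmetic above is easier to see on a standalone copy of the combining logic. Each individual result encodes "n heads added" as `1 + n` and "n heads removed" as `-1 - n`; combining sums the head deltas and re-encodes, with any `0` (failure) short-circuiting. A sketch operating on a plain list of return codes:

```python
def combineresults(results):
    """Combine addchangegroup return codes (mirrors
    combinechangegroupresults, minus the bundle2 op plumbing)."""
    changedheads = 0
    result = 1
    for ret in results:
        # any failure (0) makes the combined result a failure
        if ret == 0:
            return 0
        if ret < -1:        # ret == -1 - n encodes n heads removed
            changedheads += ret + 1
        elif ret > 1:       # ret == 1 + n encodes n heads added
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result
```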
@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
                             'targetphase'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework
    before being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    extrakwargs = {}
    targetphase = inpart.params.get('targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
                              expectedtotal=nbchangesets, **extrakwargs)
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    tr = op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

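The read loop above relies on a simple framing rule: the part payload is a whole number of 20-byte node hashes, so a short read marks the end and any leftover fragment is malformed. A minimal sketch of that loop against an in-memory stream (the helper name is illustrative):

```python
import io

def readheads(part):
    """Read consecutive 20-byte node hashes until the stream is exhausted."""
    heads = []
    h = part.read(20)
    while len(h) == 20:
        heads.append(h)
        h = part.read(20)
    if h:
        # a trailing fragment means the payload was not a multiple of 20
        raise ValueError('malformed heads payload')
    return heads
```

The handler then compares the decoded heads against `repo.heads()` and raises `PushRaced` on any mismatch, turning a lost race into a retryable error rather than a corrupted push.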
@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually updated
    during the push. If other activities happen on unrelated heads, they are
    ignored.

    This allows servers with high traffic to avoid push contention as long as
    only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

def _readphaseheads(inpart):
    headsbyphase = [[] for i in phases.allphases]
    entrysize = struct.calcsize(_fphasesentry)
    while True:
        entry = inpart.read(entrysize)
        if len(entry) < entrysize:
            if entry:
                raise error.Abort(_('bad phase-heads bundle part'))
            break
        phase, node = struct.unpack(_fphasesentry, entry)
        headsbyphase[phase].append(node)
    return headsbyphase

@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
    headsbyphase = _readphaseheads(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase)
    op.records.add('phase-heads', {})

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

@parthandler('pushvars')
def bundle2getvars(op, part):
    '''unbundle a bundle2 containing shellvars on the server'''
    # An option to disable unbundling on server-side for security reasons
    if op.ui.configbool('push', 'pushvars.server'):
        hookargs = {}
        for key, value in part.advisoryparams:
            key = key.upper()
            # We want pushed variables to have USERVAR_ prepended so we know
            # they came from the --pushvar flag.
            key = "USERVAR_" + key
            hookargs[key] = value
        op.addhookargs(hookargs)
@@ -1,993 +1,982 b''
# changegroup.py - Mercurial changegroup manipulation functions
#
# Copyright 2006 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import os
import struct
import tempfile
import weakref

from .i18n import _
from .node import (
    hex,
    nullrev,
    short,
)

from . import (
    dagutil,
    error,
    mdiff,
    phases,
    pycompat,
    util,
)

_CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
_CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
_CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"

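The three header formats above differ only in the fields they carry: cg1 frames node, p1, p2, and linknode; cg2 adds an explicit deltabase; cg3 appends 16 bits of revlog flags. A small sketch of unpacking a cg3 header, with the field order taken from the cg2/cg3 `_deltaheader` implementations below; the byte values are made up for illustration.

```python
import struct

# cg3 delta header: five 20-byte nodes plus a big-endian 16-bit flags field.
hdrfmt = ">20s20s20s20s20sH"
hdrsize = struct.calcsize(hdrfmt)  # 5 * 20 + 2 = 102 bytes

raw = (b'\x01' * 20 + b'\x02' * 20 + b'\x03' * 20 +
       b'\x04' * 20 + b'\x05' * 20 + struct.pack(">H", 0))
node, p1, p2, deltabase, cs, flags = struct.unpack(hdrfmt, raw)
assert hdrsize == 102
assert deltabase == b'\x04' * 20 and flags == 0
```

For comparison, `struct.calcsize("20s20s20s20s")` is 80 (cg1) and `struct.calcsize("20s20s20s20s20s")` is 100 (cg2), matching `deltaheadersize` in the unpacker classes below.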
def readexactly(stream, n):
    '''read n bytes from stream.read and abort if less was available'''
    s = stream.read(n)
    if len(s) < n:
        raise error.Abort(_("stream ended unexpectedly"
                            " (got %d bytes, expected %d)")
                          % (len(s), n))
    return s

def getchunk(stream):
    """return the next chunk from stream as a string"""
    d = readexactly(stream, 4)
    l = struct.unpack(">l", d)[0]
    if l <= 4:
        if l:
            raise error.Abort(_("invalid chunk length %d") % l)
        return ""
    return readexactly(stream, l - 4)

def chunkheader(length):
    """return a changegroup chunk header (string)"""
    return struct.pack(">l", length + 4)

def closechunk():
    """return a changegroup chunk header (string) for a zero-length chunk"""
    return struct.pack(">l", 0)

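The framing implemented by `getchunk`, `chunkheader`, and `closechunk` can be demonstrated as a self-contained round-trip: each chunk is a big-endian 32-bit length (which counts the 4 length bytes themselves) followed by the payload, and a bare length of 0 marks the close chunk. The helper names `frame`/`unframe` are illustrative, not part of this module.

```python
import io
import struct

def frame(payloads):
    # Length-prefix each payload; the prefix includes its own 4 bytes.
    out = b''
    for p in payloads:
        out += struct.pack(">l", len(p) + 4) + p
    return out + struct.pack(">l", 0)  # terminating close chunk

def unframe(data):
    # Mirror of getchunk(): read length, stop on a close chunk.
    stream = io.BytesIO(data)
    while True:
        l = struct.unpack(">l", stream.read(4))[0]
        if l <= 4:
            break
        yield stream.read(l - 4)

assert list(unframe(frame([b'abc', b'defg']))) == [b'abc', b'defg']
```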
def writechunks(ui, chunks, filename, vfs=None):
    """Write chunks to a file and return its filename.

    The stream is assumed to be a bundle file.
    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    """
    fh = None
    cleanup = None
    try:
        if filename:
            if vfs:
                fh = vfs.open(filename, "wb")
            else:
                # Increase default buffer size because default is usually
                # small (4k is common on Linux).
                fh = open(filename, "wb", 131072)
        else:
            fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
            fh = os.fdopen(fd, pycompat.sysstr("wb"))
        cleanup = filename
        for c in chunks:
            fh.write(c)
        cleanup = None
        return filename
    finally:
        if fh is not None:
            fh.close()
        if cleanup is not None:
            if filename and vfs:
                vfs.unlink(cleanup)
            else:
                os.unlink(cleanup)

class cg1unpacker(object):
    """Unpacker for cg1 changegroup streams.

    A changegroup unpacker handles the framing of the revision data in
    the wire format. Most consumers will want to use the apply()
    method to add the changes from the changegroup to a repository.

    If you're forwarding a changegroup unmodified to another consumer,
    use getchunks(), which returns an iterator of changegroup
    chunks. This is mostly useful for cases where you need to know the
    data stream has ended by observing the end of the changegroup.

    deltachunk() is useful only if you're applying delta data. Most
    consumers should prefer apply() instead.

    A few other public methods exist. Those are used only for
    bundlerepo and some debug commands - their use is discouraged.
    """
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '01'
    _grouplistcount = 1 # One list of files after the manifests

    def __init__(self, fh, alg, extras=None):
        if alg is None:
            alg = 'UN'
        if alg not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % alg)
        if alg == 'BZ':
            alg = '_truncatedBZ'

        compengine = util.compengines.forbundletype(alg)
        self._stream = compengine.decompressorreader(fh)
        self._type = alg
        self.extras = extras or {}
        self.callback = None

    # These methods (compressed, read, seek, tell) all appear to only
    # be used by bundlerepo, but it's a little hard to tell.
    def compressed(self):
        return self._type is not None and self._type != 'UN'
    def read(self, l):
        return self._stream.read(l)
    def seek(self, pos):
        return self._stream.seek(pos)
    def tell(self):
        return self._stream.tell()
    def close(self):
        return self._stream.close()

    def _chunklength(self):
        d = readexactly(self._stream, 4)
        l = struct.unpack(">l", d)[0]
        if l <= 4:
            if l:
                raise error.Abort(_("invalid chunk length %d") % l)
            return 0
        if self.callback:
            self.callback()
        return l - 4

    def changelogheader(self):
        """v10 does not have a changelog header chunk"""
        return {}

    def manifestheader(self):
        """v10 does not have a manifest header chunk"""
        return {}

    def filelogheader(self):
        """return the header of the filelogs chunk, v10 only has the filename"""
        l = self._chunklength()
        if not l:
            return {}
        fname = readexactly(self._stream, l)
        return {'filename': fname}

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, cs = headertuple
        if prevnode is None:
            deltabase = p1
        else:
            deltabase = prevnode
        flags = 0
        return node, p1, p2, deltabase, cs, flags

    def deltachunk(self, prevnode):
        l = self._chunklength()
        if not l:
            return {}
        headerdata = readexactly(self._stream, self.deltaheadersize)
        header = struct.unpack(self.deltaheader, headerdata)
        delta = readexactly(self._stream, l - self.deltaheadersize)
        node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
        return {'node': node, 'p1': p1, 'p2': p2, 'cs': cs,
                'deltabase': deltabase, 'delta': delta, 'flags': flags}

    def getchunks(self):
        """returns all the chunks contained in the bundle

        Used when you need to forward the binary stream to a file or another
        network API. To do so, it parses the changegroup data; otherwise it
        would block when reading from an sshrepo, because it would not know
        where the stream ends.
        """
        # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
        # and a list of filelogs. For changegroup 3, we expect 4 parts:
        # changelog, manifestlog, a list of tree manifestlogs, and a list of
        # filelogs.
        #
        # Changelog and manifestlog parts are terminated with empty chunks. The
        # tree and file parts are a list of entry sections. Each entry section
        # is a series of chunks terminating in an empty chunk. The list of these
        # entry sections is terminated in yet another empty chunk, so we know
        # we've reached the end of the tree/file list when we reach an empty
        # chunk that was preceded by no non-empty chunks.

        parts = 0
        while parts < 2 + self._grouplistcount:
            noentries = True
            while True:
                chunk = getchunk(self)
                if not chunk:
                    # The first two empty chunks represent the end of the
                    # changelog and the manifestlog portions. The remaining
                    # empty chunks represent either A) the end of individual
                    # tree or file entries in the file list, or B) the end of
                    # the entire list. It's the end of the entire list if there
                    # were no entries (i.e. noentries is True).
                    if parts < 2:
                        parts += 1
                    elif noentries:
                        parts += 1
                    break
                noentries = False
                yield chunkheader(len(chunk))
                pos = 0
                while pos < len(chunk):
                    next = pos + 2**20
                    yield chunk[pos:next]
                    pos = next
            yield closechunk()

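The terminator-counting logic described in the comments above can be simulated on a plain list of chunks, where an empty string stands for an empty (terminating) chunk. This is a standalone sketch; `countparts` is a hypothetical helper that mirrors only the `parts`/`noentries` bookkeeping, not the real streaming code.

```python
def countparts(chunks, grouplistcount=1):
    # With the cg1/cg2 layout (grouplistcount == 1) the stream ends after
    # the changelog terminator, the manifest terminator, and a final
    # file-list terminator that follows no entries.
    parts = 0
    it = iter(chunks)
    while parts < 2 + grouplistcount:
        noentries = True
        for chunk in it:
            if not chunk:
                if parts < 2 or noentries:
                    parts += 1
                break
            noentries = False
    return parts

# changelog, manifests, one filelog entry, then end of the file list.
stream = ['c1', '', 'm1', '', 'f1', '', '']
assert countparts(stream) == 3
```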
    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        # We know that we'll never have more manifests than we had
        # changesets.
        self.callback = prog(_('manifests'), numchanges)
        # no need to check for empty manifest group here:
        # if the result of the merge of 1 and 2 is the same in 3 and 4,
        # no new manifest will be created and the manifest group will
        # be empty during the pull
        self.manifestheader()
        repo.manifestlog._revlog.addgroup(self, revmap, trp)
        repo.ui.progress(_('manifests'), None)
        self.callback = None

    def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
              expectedtotal=None):
        """Add the changegroup returned by source.read() to this repo.
        srctype is a string like 'push', 'pull', or 'unbundle'. url is
        the URL of the repo where this changegroup is coming from.

        Return an integer summarizing the change to this repo:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        repo = repo.unfiltered()
        def csmap(x):
            repo.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        changesets = files = revisions = 0

        try:
            # The transaction may already carry source information. In this
            # case we use the top level data. We overwrite the arguments
            # because we need to use the top level values (if they exist)
            # in this function.
            srctype = tr.hookargs.setdefault('source', srctype)
            url = tr.hookargs.setdefault('url', url)
            repo.hook('prechangegroup',
                      throw=True, **pycompat.strkwargs(tr.hookargs))

            # write changelog data to temp files so concurrent readers
            # will not see an inconsistent view
            cl = repo.changelog
            cl.delayupdate(tr)
            oldheads = set(cl.heads())

            trp = weakref.proxy(tr)
            # pull off the changeset group
            repo.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                def __init__(self, step, total):
                    self._step = step
                    self._total = total
                    self._count = 1
                def __call__(self):
                    repo.ui.progress(self._step, self._count, unit=_('chunks'),
                                     total=self._total)
                    self._count += 1
            self.callback = prog(_('changesets'), expectedtotal)

            efiles = set()
            def onchangelog(cl, node):
                efiles.update(cl.readfiles(node))

            self.changelogheader()
            cgnodes = cl.addgroup(self, csmap, trp, addrevisioncb=onchangelog)
            efiles = len(efiles)

            if not cgnodes:
                repo.ui.develwarn('applied empty changegroup',
                                  config='empty-changegroup')
            clend = len(cl)
            changesets = clend - clstart
            repo.ui.progress(_('changesets'), None)
            self.callback = None

            # pull off the manifest group
            repo.ui.status(_("adding manifests\n"))
            self._unpackmanifests(repo, revmap, trp, prog, changesets)

            needfiles = {}
            if repo.ui.configbool('server', 'validate'):
                cl = repo.changelog
                ml = repo.manifestlog
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfnode = cl.changelogrevision(cset).manifest
                    mfest = ml[mfnode].readdelta()
                    # store file cgnodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            repo.ui.status(_("adding file changes\n"))
            newrevs, newfiles = _addchangegroupfiles(
                repo, self, revmap, trp, efiles, needfiles)
            revisions += newrevs
            files += newfiles

            deltaheads = 0
            if oldheads:
                heads = cl.heads()
                deltaheads = len(heads) - len(oldheads)
                for h in heads:
                    if h not in oldheads and repo[h].closesbranch():
                        deltaheads -= 1
            htext = ""
            if deltaheads:
                htext = _(" (%+d heads)") % deltaheads

            repo.ui.status(_("added %d changesets"
                             " with %d changes to %d files%s\n")
                           % (changesets, revisions, files, htext))
            repo.invalidatevolatilesets()

            if changesets > 0:
                if 'node' not in tr.hookargs:
                    tr.hookargs['node'] = hex(cl.node(clstart))
                    tr.hookargs['node_last'] = hex(cl.node(clend - 1))
                    hookargs = dict(tr.hookargs)
                else:
                    hookargs = dict(tr.hookargs)
                    hookargs['node'] = hex(cl.node(clstart))
                    hookargs['node_last'] = hex(cl.node(clend - 1))
                repo.hook('pretxnchangegroup',
                          throw=True, **pycompat.strkwargs(hookargs))

            added = [cl.node(r) for r in xrange(clstart, clend)]
            phaseall = None
            if srctype in ('push', 'serve'):
                # Old servers can not push the boundary themselves.
                # New servers won't push the boundary if a changeset already
                # exists locally as secret.
                #
                # We should not use added here but the list of all changes
                # in the bundle
                if repo.publishing():
                    targetphase = phaseall = phases.public
                else:
                    # closer target phase computation

                    # Those changesets have been pushed from the
                    # outside, their phases are going to be pushed
                    # alongside. Therefore `targetphase` is
                    # ignored.
                    targetphase = phaseall = phases.draft
            if added:
                phases.registernew(repo, tr, targetphase, added)
            if phaseall is not None:
                phases.advanceboundary(repo, tr, phaseall, cgnodes)

            if changesets > 0:

                def runhooks():
                    # These hooks run when the lock releases, not when the
                    # transaction closes. So it's possible for the changelog
                    # to have changed since we last saw it.
                    if clstart >= len(repo):
                        return

                    repo.hook("changegroup", **pycompat.strkwargs(hookargs))

                    for n in added:
                        args = hookargs.copy()
                        args['node'] = hex(n)
                        del args['node_last']
                        repo.hook("incoming", **pycompat.strkwargs(args))

                    newheads = [h for h in repo.heads()
                                if h not in oldheads]
                    repo.ui.log("incoming",
                                "%s incoming changes - new heads: %s\n",
                                len(added),
                                ', '.join([hex(c[:6]) for c in newheads]))

                tr.addpostclose('changegroup-runhooks-%020i' % clstart,
                                lambda tr: repo._afterlock(runhooks))
        finally:
            repo.ui.flush()
        # never return 0 here:
        if deltaheads < 0:
            ret = deltaheads - 1
        else:
            ret = deltaheads + 1
        return ret
430
430
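The return-value mapping above deliberately never produces 0, shifting non-negative head deltas up by one and negative ones down by one so callers can still distinguish "no changes" from a successful apply. A standalone sketch of that encoding (the helper name is hypothetical):

```python
def encode_deltaheads(deltaheads):
    # Never return 0: shift non-negative counts up by one and
    # negative counts down by one, preserving the sign information.
    if deltaheads < 0:
        return deltaheads - 1
    return deltaheads + 1
```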
class cg2unpacker(cg1unpacker):
    """Unpacker for cg2 streams.

    cg2 streams add support for generaldelta, so the delta header
    format is slightly different. All other features about the data
    remain the same.
    """
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '02'

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs = headertuple
        flags = 0
        return node, p1, p2, deltabase, cs, flags

class cg3unpacker(cg2unpacker):
    """Unpacker for cg3 streams.

    cg3 streams add support for exchanging treemanifests and revlog
    flags. It adds the revlog flags to the delta header and an empty chunk
    separating manifests and files.
    """
    deltaheader = _CHANGEGROUPV3_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '03'
    _grouplistcount = 2 # One list of manifests and one list of files

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs, flags = headertuple
        return node, p1, p2, deltabase, cs, flags

    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
                                                  numchanges)
        for chunkdata in iter(self.filelogheader, {}):
            # If we get here, there are directory manifests in the changegroup
            d = chunkdata["filename"]
            repo.ui.debug("adding %s revisions\n" % d)
            dirlog = repo.manifestlog._revlog.dirlog(d)
            if not dirlog.addgroup(self, revmap, trp):
                raise error.Abort(_("received dir revlog group is empty"))

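The only structural difference between the cg2 and cg3 delta headers is the trailing 16-bit revlog-flags field, which is why `cg2unpacker._deltaheader` synthesizes `flags = 0` while `cg3unpacker._deltaheader` reads it from the tuple. A minimal sketch of the framing; the format strings here are assumptions standing in for `_CHANGEGROUPV2_DELTA_HEADER` and `_CHANGEGROUPV3_DELTA_HEADER`, which are defined earlier in this module:

```python
import struct

# Assumed layouts: five 20-byte sha1 hashes for cg2
# (node, p1, p2, deltabase, cs), plus a big-endian
# unsigned short of revlog flags for cg3.
CG2_HEADER = '20s20s20s20s20s'
CG3_HEADER = '>20s20s20s20s20sH'

# Unpacking mirrors cg3unpacker._deltaheader: the tuple order
# matches the pack order.
raw = struct.pack(CG3_HEADER, b'n' * 20, b'1' * 20, b'2' * 20,
                  b'b' * 20, b'c' * 20, 0)
node, p1, p2, deltabase, cs, flags = struct.unpack(CG3_HEADER, raw)
```

`struct.calcsize` on these formats gives the per-version `deltaheadersize` the unpackers compute at class-definition time.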
class headerlessfixup(object):
    def __init__(self, fh, h):
        self._h = h
        self._fh = fh
    def read(self, n):
        if self._h:
            d, self._h = self._h[:n], self._h[n:]
            if len(d) < n:
                d += readexactly(self._fh, n - len(d))
            return d
        return readexactly(self._fh, n)

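`headerlessfixup` pushes already-consumed header bytes back in front of a stream, so a later reader sees the payload from the beginning. A self-contained sketch of the same idea, with a minimal stand-in for this module's `readexactly` helper:

```python
import io

def _readexactly(fh, n):
    # Minimal stand-in for changegroup.readexactly: read n bytes or fail.
    d = fh.read(n)
    if len(d) < n:
        raise EOFError('stream ended unexpectedly')
    return d

class _headerlessfixup(object):
    def __init__(self, fh, h):
        self._h = h    # header bytes already consumed from fh
        self._fh = fh
    def read(self, n):
        if self._h:
            d, self._h = self._h[:n], self._h[n:]
            if len(d) < n:
                d += _readexactly(self._fh, n - len(d))
            return d
        return _readexactly(self._fh, n)

# e.g. the first 4 bytes were peeked to sniff the bundle type;
# reads now see the full stream from the start.
f = _headerlessfixup(io.BytesIO(b'REST-OF-STREAM'), b'HG10')
```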
class cg1packer(object):
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    version = '01'
    def __init__(self, repo, bundlecaps=None):
        """Given a source repo, construct a bundler.

        bundlecaps is optional and can be used to specify the set of
        capabilities which can be used to build the bundle. While bundlecaps is
        unused in core Mercurial, extensions rely on this feature to communicate
        capabilities to customize the changegroup packer.
        """
        # Set of capabilities we can use to build the bundle.
        if bundlecaps is None:
            bundlecaps = set()
        self._bundlecaps = bundlecaps
        # experimental config: bundle.reorder
        reorder = repo.ui.config('bundle', 'reorder')
        if reorder == 'auto':
            reorder = None
        else:
            reorder = util.parsebool(reorder)
        self._repo = repo
        self._reorder = reorder
        self._progress = repo.ui.progress
        if self._repo.ui.verbose and not self._repo.ui.debugflag:
            self._verbosenote = self._repo.ui.note
        else:
            self._verbosenote = lambda s: None

    def close(self):
        return closechunk()

    def fileheader(self, fname):
        return chunkheader(len(fname)) + fname

    # Extracted both for clarity and for overriding in extensions.
    def _sortgroup(self, revlog, nodelist, lookup):
        """Sort nodes for change group and turn them into revnums."""
        # for generaldelta revlogs, we linearize the revs; this will both be
        # much quicker and generate a much smaller bundle
        if (revlog._generaldelta and self._reorder is None) or self._reorder:
            dag = dagutil.revlogdag(revlog)
            return dag.linearize(set(revlog.rev(n) for n in nodelist))
        else:
            return sorted([revlog.rev(n) for n in nodelist])

    def group(self, nodelist, revlog, lookup, units=None):
        """Calculate a delta group, yielding a sequence of changegroup chunks
        (strings).

        Given a list of changeset revs, return a set of deltas and
        metadata corresponding to nodes. The first delta is
        first parent(nodelist[0]) -> nodelist[0], the receiver is
        guaranteed to have this parent as it has all history before
        these changesets. In the case firstparent is nullrev the
        changegroup starts with a full revision.

        If units is not None, progress detail will be generated, units specifies
        the type of revlog that is touched (changelog, manifest, etc.).
        """
        # if we don't have any revisions touched by these changesets, bail
        if len(nodelist) == 0:
            yield self.close()
            return

        revs = self._sortgroup(revlog, nodelist, lookup)

        # add the parent of the first rev
        p = revlog.parentrevs(revs[0])[0]
        revs.insert(0, p)

        # build deltas
        total = len(revs) - 1
        msgbundling = _('bundling')
        for r in xrange(len(revs) - 1):
            if units is not None:
                self._progress(msgbundling, r + 1, unit=units, total=total)
            prev, curr = revs[r], revs[r + 1]
            linknode = lookup(revlog.node(curr))
            for c in self.revchunk(revlog, curr, prev, linknode):
                yield c

        if units is not None:
            self._progress(msgbundling, None)
        yield self.close()

    # filter any nodes that claim to be part of the known set
    def prune(self, revlog, missing, commonrevs):
        rr, rl = revlog.rev, revlog.linkrev
        return [n for n in missing if rl(rr(n)) not in commonrevs]

    def _packmanifests(self, dir, mfnodes, lookuplinknode):
        """Pack flat manifests into a changegroup stream."""
        assert not dir
        for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
                                lookuplinknode, units=_('manifests')):
            yield chunk

    def _manifestsdone(self):
        return ''

    def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
        '''yield a sequence of changegroup chunks (strings)'''
        repo = self._repo
        cl = repo.changelog

        clrevorder = {}
        mfs = {} # needed manifests
        fnodes = {} # needed file nodes
        changedfiles = set()

        # Callback for the changelog, used to collect changed files and manifest
        # nodes.
        # Returns the linkrev node (identity in the changelog case).
        def lookupcl(x):
            c = cl.read(x)
            clrevorder[x] = len(clrevorder)
            n = c[0]
            # record the first changeset introducing this manifest version
            mfs.setdefault(n, x)
            # Record a complete list of potentially-changed files in
            # this manifest.
            changedfiles.update(c[3])
            return x

        self._verbosenote(_('uncompressed size of bundle content:\n'))
        size = 0
        for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
            size += len(chunk)
            yield chunk
        self._verbosenote(_('%8.i (changelog)\n') % size)

        # We need to make sure that the linkrev in the changegroup refers to
        # the first changeset that introduced the manifest or file revision.
        # The fastpath is usually safer than the slowpath, because the filelogs
        # are walked in revlog order.
        #
        # When taking the slowpath with reorder=None and the manifest revlog
        # uses generaldelta, the manifest may be walked in the "wrong" order.
        # Without 'clrevorder', we would get an incorrect linkrev (see fix in
        # cc0ff93d0c0c).
        #
        # When taking the fastpath, we are only vulnerable to reordering
        # of the changelog itself. The changelog never uses generaldelta, so
        # it is only reordered when reorder=True. To handle this case, we
        # simply take the slowpath, which already has the 'clrevorder' logic.
        # This was also fixed in cc0ff93d0c0c.
        fastpathlinkrev = fastpathlinkrev and not self._reorder
        # Treemanifests don't work correctly with fastpathlinkrev
        # either, because we don't discover which directory nodes to
        # send along with files. This could probably be fixed.
        fastpathlinkrev = fastpathlinkrev and (
            'treemanifest' not in repo.requirements)

        for chunk in self.generatemanifests(commonrevs, clrevorder,
                                            fastpathlinkrev, mfs, fnodes):
            yield chunk
        mfs.clear()
        clrevs = set(cl.rev(x) for x in clnodes)

        if not fastpathlinkrev:
            def linknodes(unused, fname):
                return fnodes.get(fname, {})
        else:
            cln = cl.node
            def linknodes(filerevlog, fname):
                llr = filerevlog.linkrev
                fln = filerevlog.node
                revs = ((r, llr(r)) for r in filerevlog)
                return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)

        for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
                                        source):
            yield chunk

        yield self.close()

        if clnodes:
            repo.hook('outgoing', node=hex(clnodes[0]), source=source)

    def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
                          fnodes):
        repo = self._repo
        mfl = repo.manifestlog
        dirlog = mfl._revlog.dirlog
        tmfnodes = {'': mfs}

        # Callback for the manifest, used to collect linkrevs for filelog
        # revisions.
        # Returns the linkrev node (collected in lookupcl).
        def makelookupmflinknode(dir):
            if fastpathlinkrev:
                assert not dir
                return mfs.__getitem__

            def lookupmflinknode(x):
                """Callback for looking up the linknode for manifests.

                Returns the linkrev node for the specified manifest.

                SIDE EFFECT:

                1) fclnodes gets populated with the list of relevant
                   file nodes if we're not using fastpathlinkrev
                2) When treemanifests are in use, collects treemanifest nodes
                   to send

                Note that this means manifests must be completely sent to
                the client before you can trust the list of files and
                treemanifests to send.
                """
                clnode = tmfnodes[dir][x]
                mdata = mfl.get(dir, x).readfast(shallow=True)
                for p, n, fl in mdata.iterentries():
                    if fl == 't': # subdirectory manifest
                        subdir = dir + p + '/'
                        tmfclnodes = tmfnodes.setdefault(subdir, {})
                        tmfclnode = tmfclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[tmfclnode]:
                            tmfclnodes[n] = clnode
                    else:
                        f = dir + p
                        fclnodes = fnodes.setdefault(f, {})
                        fclnode = fclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[fclnode]:
                            fclnodes[n] = clnode
                return clnode
            return lookupmflinknode

        size = 0
        while tmfnodes:
            dir = min(tmfnodes)
            nodes = tmfnodes[dir]
            prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
            if not dir or prunednodes:
                for x in self._packmanifests(dir, prunednodes,
                                             makelookupmflinknode(dir)):
                    size += len(x)
                    yield x
            del tmfnodes[dir]
        self._verbosenote(_('%8.i (manifests)\n') % size)
        yield self._manifestsdone()

    # The 'source' parameter is useful for extensions
    def generatefiles(self, changedfiles, linknodes, commonrevs, source):
        repo = self._repo
        progress = self._progress
        msgbundling = _('bundling')

        total = len(changedfiles)
        # for progress output
        msgfiles = _('files')
        for i, fname in enumerate(sorted(changedfiles)):
            filerevlog = repo.file(fname)
            if not filerevlog:
                raise error.Abort(_("empty or missing revlog for %s") % fname)

            linkrevnodes = linknodes(filerevlog, fname)
            # Lookup for filenodes, we collected the linkrev nodes above in the
            # fastpath case and with lookupmf in the slowpath case.
            def lookupfilelog(x):
                return linkrevnodes[x]

            filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
            if filenodes:
                progress(msgbundling, i + 1, item=fname, unit=msgfiles,
                         total=total)
                h = self.fileheader(fname)
                size = len(h)
                yield h
                for chunk in self.group(filenodes, filerevlog, lookupfilelog):
                    size += len(chunk)
                    yield chunk
                self._verbosenote(_('%8.i %s\n') % (size, fname))
        progress(msgbundling, None)

    def deltaparent(self, revlog, rev, p1, p2, prev):
        return prev

    def revchunk(self, revlog, rev, prev, linknode):
        node = revlog.node(rev)
        p1, p2 = revlog.parentrevs(rev)
        base = self.deltaparent(revlog, rev, p1, p2, prev)

        prefix = ''
        if revlog.iscensored(base) or revlog.iscensored(rev):
            try:
                delta = revlog.revision(node, raw=True)
            except error.CensoredNodeError as e:
                delta = e.tombstone
            if base == nullrev:
                prefix = mdiff.trivialdiffheader(len(delta))
            else:
                baselen = revlog.rawsize(base)
                prefix = mdiff.replacediffheader(baselen, len(delta))
        elif base == nullrev:
            delta = revlog.revision(node, raw=True)
            prefix = mdiff.trivialdiffheader(len(delta))
        else:
            delta = revlog.revdiff(base, rev)
        p1n, p2n = revlog.parents(node)
        basenode = revlog.node(base)
        flags = revlog.flags(rev)
        meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
        meta += prefix
        l = len(meta) + len(delta)
        yield chunkheader(l)
        yield meta
        yield delta
    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # do nothing with basenode, it is implicitly the previous one in HG10
        # do nothing with flags, it is implicitly 0 for cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, linknode)

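Every chunk `revchunk` emits is length-prefixed: `chunkheader(l)` frames `meta + delta`, and a zero-length header (`closechunk`) terminates a group. A sketch of that framing; the two helper definitions below are assumptions mirroring the `chunkheader`/`closechunk` functions defined earlier in this module:

```python
import struct

def _chunkheader(length):
    # The big-endian length prefix counts itself (4 bytes) plus the payload.
    return struct.pack(">l", length + 4)

def _closechunk():
    # A bare zero header marks the end of a group of chunks.
    return struct.pack(">l", 0)

payload = b'meta-bytes' + b'delta-bytes'
frame = _chunkheader(len(payload)) + payload + _closechunk()

# A reader recovers the framed length from the prefix.
(framed_len,) = struct.unpack(">l", frame[:4])
```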
class cg2packer(cg1packer):
    version = '02'
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER

    def __init__(self, repo, bundlecaps=None):
        super(cg2packer, self).__init__(repo, bundlecaps)
        if self._reorder is None:
            # Since generaldelta is directly supported by cg2, reordering
            # generally doesn't help, so we disable it by default (treating
            # bundle.reorder=auto just like bundle.reorder=False).
            self._reorder = False

    def deltaparent(self, revlog, rev, p1, p2, prev):
        dp = revlog.deltaparent(rev)
        if dp == nullrev and revlog.storedeltachains:
            # Avoid sending full revisions when delta parent is null. Pick prev
            # in that case. It's tempting to pick p1 in this case, as p1 will
            # be smaller in the common case. However, computing a delta against
            # p1 may require resolving the raw text of p1, which could be
            # expensive. The revlog caches should have prev cached, meaning
            # less CPU for changegroup generation. There is likely room to add
            # a flag and/or config option to control this behavior.
            return prev
        elif dp == nullrev:
            # revlog is configured to use full snapshot for a reason,
            # stick to full snapshot.
            return nullrev
        elif dp not in (p1, p2, prev):
            # Pick prev when we can't be sure remote has the base revision.
            return prev
        else:
            return dp

    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # Do nothing with flags, it is implicitly 0 in cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)

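The base-selection policy in `cg2packer.deltaparent` reduces to a pure function over revision numbers. This standalone sketch (with `nullrev = -1`, as in Mercurial, and a hypothetical function name) mirrors its branches:

```python
nullrev = -1

def choose_deltabase(dp, p1, p2, prev, storedeltachains=True):
    # Mirrors cg2packer.deltaparent: prefer the stored delta parent,
    # but fall back to prev when the receiver may not have the base.
    if dp == nullrev and storedeltachains:
        return prev      # avoid resending a full revision
    elif dp == nullrev:
        return nullrev   # revlog insists on full snapshots
    elif dp not in (p1, p2, prev):
        return prev      # remote may not have dp
    else:
        return dp
```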
837 class cg3packer(cg2packer):
837 class cg3packer(cg2packer):
838 version = '03'
838 version = '03'
839 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
839 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
840
840
841 def _packmanifests(self, dir, mfnodes, lookuplinknode):
841 def _packmanifests(self, dir, mfnodes, lookuplinknode):
842 if dir:
842 if dir:
843 yield self.fileheader(dir)
843 yield self.fileheader(dir)
844
844
845 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
845 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
846 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
846 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
847 units=_('manifests')):
847 units=_('manifests')):
848 yield chunk
848 yield chunk
849
849
850 def _manifestsdone(self):
850 def _manifestsdone(self):
851 return self.close()
851 return self.close()
852
852
853 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
853 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
854 return struct.pack(
854 return struct.pack(
855 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
855 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
856
856
857 _packermap = {'01': (cg1packer, cg1unpacker),
857 _packermap = {'01': (cg1packer, cg1unpacker),
858 # cg2 adds support for exchanging generaldelta
858 # cg2 adds support for exchanging generaldelta
859 '02': (cg2packer, cg2unpacker),
859 '02': (cg2packer, cg2unpacker),
860 # cg3 adds support for exchanging revlog flags and treemanifests
860 # cg3 adds support for exchanging revlog flags and treemanifests
861 '03': (cg3packer, cg3unpacker),
861 '03': (cg3packer, cg3unpacker),
862 }
862 }

def allsupportedversions(repo):
    versions = set(_packermap.keys())
    if not (repo.ui.configbool('experimental', 'changegroup3') or
            repo.ui.configbool('experimental', 'treemanifest') or
            'treemanifest' in repo.requirements):
        versions.discard('03')
    return versions

# Changegroup versions that can be applied to the repo
def supportedincomingversions(repo):
    return allsupportedversions(repo)

# Changegroup versions that can be created from the repo
def supportedoutgoingversions(repo):
    versions = allsupportedversions(repo)
    if 'treemanifest' in repo.requirements:
        # Versions 01 and 02 support only flat manifests and it's just too
        # expensive to convert between the flat manifest and tree manifest on
        # the fly. Since tree manifests are hashed differently, all of history
        # would have to be converted. Instead, we simply don't even pretend to
        # support versions 01 and 02.
        versions.discard('01')
        versions.discard('02')
    return versions

def safeversion(repo):
    # Finds the smallest version that it's safe to assume clients of the repo
    # will support. For example, all hg versions that support generaldelta also
    # support changegroup 02.
    versions = supportedoutgoingversions(repo)
    if 'generaldelta' in repo.requirements:
        versions.discard('01')
    assert versions
    return min(versions)
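The version-negotiation pattern above (a version registry, requirement-driven discards, then `min()`) can be sketched in isolation. This is an illustrative stand-in, not Mercurial's real objects: the registry values are placeholder strings and `changegroup3` stands in for the `experimental.changegroup3` config flag.

```python
# Hypothetical registry mirroring _packermap's shape; values are placeholders.
_registry = {'01': ('packer1', 'unpacker1'),
             '02': ('packer2', 'unpacker2'),
             '03': ('packer3', 'unpacker3')}

def safeversion_sketch(requirements, changegroup3=False):
    """Pick the smallest changegroup version every client can be assumed
    to support, mirroring safeversion()/supportedoutgoingversions()."""
    versions = set(_registry)
    if not changegroup3 and 'treemanifest' not in requirements:
        versions.discard('03')        # cg3 only when explicitly enabled
    if 'treemanifest' in requirements:
        versions.discard('01')        # flat-manifest-only formats are out
        versions.discard('02')
    if 'generaldelta' in requirements:
        versions.discard('01')        # generaldelta clients all speak 02
    assert versions
    # Version strings are zero-padded, so lexicographic min() works.
    return min(versions)
```

Zero-padded version strings are what make the bare `min()` in `safeversion` safe: '01' < '02' < '03' holds lexicographically.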

def getbundler(version, repo, bundlecaps=None):
    assert version in supportedoutgoingversions(repo)
    return _packermap[version][0](repo, bundlecaps)

def getunbundler(version, fh, alg, extras=None):
    return _packermap[version][1](fh, alg, extras=extras)

def _changegroupinfo(repo, nodes, source):
    if repo.ui.verbose or source == 'bundle':
        repo.ui.status(_("%d changesets found\n") % len(nodes))
        if repo.ui.debugflag:
            repo.ui.debug("list of changesets:\n")
            for node in nodes:
                repo.ui.debug("%s\n" % hex(node))

def makestream(repo, outgoing, version, source, fastpath=False,
               bundlecaps=None):
    bundler = getbundler(version, repo, bundlecaps=bundlecaps)
    return getsubsetraw(repo, outgoing, bundler, source, fastpath=fastpath)

def makechangegroup(repo, outgoing, version, source, fastpath=False,
                    bundlecaps=None):
    cgstream = makestream(repo, outgoing, version, source,
                          fastpath=fastpath, bundlecaps=bundlecaps)
    return getunbundler(version, util.chunkbuffer(cgstream), None,
                        {'clcount': len(outgoing.missing)})

def getsubsetraw(repo, outgoing, bundler, source, fastpath=False):
    repo = repo.unfiltered()
    commonrevs = outgoing.common
    csets = outgoing.missing
    heads = outgoing.missingheads
    # We go through the fast path if we get told to, or if all (unfiltered)
    # heads have been requested (since we then know all linkrevs will be
    # pulled by the client).
    heads.sort()
    fastpathlinkrev = fastpath or (
        repo.filtername is None and heads == sorted(repo.heads()))

    repo.hook('preoutgoing', throw=True, source=source)
    _changegroupinfo(repo, csets, source)
    return bundler.generate(commonrevs, csets, fastpathlinkrev, source)

def getchangegroup(repo, source, outgoing, bundlecaps=None,
                   version='01'):
    """Like getbundle, but taking a discovery.outgoing as an argument.

    This is only implemented for local repos and reuses potentially
    precomputed sets in outgoing."""
    if not outgoing.missing:
        return None
    return makechangegroup(repo, outgoing, version, source,
                           bundlecaps=bundlecaps)

def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
    revisions = 0
    files = 0
    for chunkdata in iter(source.filelogheader, {}):
        files += 1
        f = chunkdata["filename"]
        repo.ui.debug("adding %s revisions\n" % f)
        repo.ui.progress(_('files'), files, unit=_('files'),
                         total=expectedfiles)
        fl = repo.file(f)
        o = len(fl)
        try:
            if not fl.addgroup(source, revmap, trp):
                raise error.Abort(_("received file revlog group is empty"))
        except error.CensoredBaseError as e:
            raise error.Abort(_("received delta base is censored: %s") % e)
        revisions += len(fl) - o
        if f in needfiles:
            needs = needfiles[f]
            for new in xrange(o, len(fl)):
                n = fl.node(new)
                if n in needs:
                    needs.remove(n)
                else:
                    raise error.Abort(
                        _("received spurious file revlog entry"))
            if not needs:
                del needfiles[f]
    repo.ui.progress(_('files'), None)

    for f, needs in needfiles.iteritems():
        fl = repo.file(f)
        for n in needs:
            try:
                fl.rev(n)
            except error.LookupError:
                raise error.Abort(
                    _('missing file data for %s:%s - run hg verify') %
                    (f, hex(n)))

    return revisions, files
@@ -1,2011 +1,2011 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import hashlib

from .i18n import _
from .node import (
    hex,
    nullid,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    scmutil,
    sslutil,
    streamclone,
    url as urlmod,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {'v1': '01',
                         'v2': '02',
                         'packed1': 's1',
                         'bundle2': '02', # legacy
                        }

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {'gzip', 'bzip2', 'none'}

def parsebundlespec(repo, spec, strict=True, externalnames=False):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    If ``externalnames`` is False (the default), the human-centric names will
    be converted to their internal representation.

    Returns a 3-tuple of (compression, version, parameters). Compression will
    be ``None`` if not in strict mode and a compression isn't defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    def parseparams(s):
        if ';' not in s:
            return s, {}

        params = {}
        version, paramstr = s.split(';', 1)

        for p in paramstr.split(';'):
            if '=' not in p:
                raise error.InvalidBundleSpecification(
                    _('invalid bundle specification: '
                      'missing "=" in parameter: %s') % p)

            key, value = p.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params

    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
            _('invalid bundle specification; '
              'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _('missing support for repository features: %s') %
                ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
        compression = engine.bundletype()[1]
        version = _bundlespeccgversions[version]
    return compression, version, params
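The `<compression>-<version>[;key=value...]` grammar that `parsebundlespec` implements can be shown in a self-contained sketch. The recognized compression and version tables here are illustrative stand-ins for Mercurial's real engine registry, and this covers only the strict path (compression required), in Python 3 for brevity.

```python
from urllib.parse import unquote

# Placeholder tables standing in for util.compengines and _bundlespeccgversions.
KNOWN_COMPRESSIONS = {'gzip', 'bzip2', 'none', 'zstd'}
KNOWN_VERSIONS = {'v1': '01', 'v2': '02', 'packed1': 's1'}

def parse_bundlespec_sketch(spec):
    """Return (compression, internal version, params) for a strict spec."""
    if '-' not in spec:
        raise ValueError('must be prefixed with compression: %s' % spec)
    compression, rest = spec.split('-', 1)
    if compression not in KNOWN_COMPRESSIONS:
        raise ValueError('%s compression is not supported' % compression)
    params = {}
    if ';' in rest:
        # Everything after the first ';' is URI-encoded key=value pairs.
        version, paramstr = rest.split(';', 1)
        for p in paramstr.split(';'):
            if '=' not in p:
                raise ValueError('missing "=" in parameter: %s' % p)
            key, value = p.split('=', 1)
            params[unquote(key)] = unquote(value)
    else:
        version = rest
    if version not in KNOWN_VERSIONS:
        raise ValueError('%s is not a recognized bundle version' % version)
    return compression, KNOWN_VERSIONS[version], params
```

For example, the packed1 stream-clone spec `none-packed1;requirements=generaldelta` parses into compression `none`, internal version `s1`, and a `requirements` parameter.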

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

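The 4-byte header dispatch in `readbundle` (magic `HG` plus a two-byte format code) can be sketched on its own. The return labels here are illustrative; the real function returns unpacker objects, and for `HG10` the compression algorithm is read from the next two bytes of the stream rather than the header.

```python
def classify_bundle_header(header):
    """Map a 4-byte bundle header to a reader family, per readbundle()."""
    if not header.startswith(b'HG') and header.startswith(b'\0'):
        # Headerless changegroup: treat as an uncompressed HG10 stream.
        return 'cg1', 'UN'
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version == b'10':
        return 'cg1', None          # compression comes from the next 2 bytes
    elif version.startswith(b'2'):
        return 'bundle2', None      # HG20 and any future HG2x
    elif version == b'S1':
        return 'streamclone', None
    raise ValueError('unknown bundle version %r' % version)
```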
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                # Report the algorithm named by the bundle, not the failed
                # lookup result (which would be None).
                raise error.Abort(_('unknown compression algorithm: %s') %
                                  b.params['Compression'])
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    # The goal of this config option is to allow developers to choose the
    # bundle version used during exchange. This is especially handy during
    # tests. The value is a list of bundle versions to pick from; the
    # highest supported version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')

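The selection rule above is compact enough to restate as a pure function: bundle1 is forced only when the `devel.legacy.exchange` list names `bundle1` without also naming `bundle2`, or when the remote lacks the `bundle2` capability. The argument names here are illustrative stand-ins for the config list and the remote's capability set.

```python
def force_bundle1(exchange_config, remote_caps):
    """Mirror _forcebundle1(): True when bundle1 must be used."""
    forced = 'bundle2' not in exchange_config and 'bundle1' in exchange_config
    return forced or 'bundle2' not in remote_caps
```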
class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=(), pushvars=None):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map {pushkey partid -> callback handling failure}
        # used to handle exceptions from mandatory pushkey part failures
        self.pkfailcb = {}
        # an iterable of pushvars or None
        self.pushvars = pushvars

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no revs were targeted for push, so all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * the missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * the commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

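The `futureheads` and `fallbackheads` properties above rely on `util.propertycache` to compute their answers at most once per push operation. Below is a minimal, self-contained sketch of how such a lazily-cached non-data descriptor can work; the `propertycache` class and the `Push` example are illustrative, not Mercurial's actual implementation.

```python
class propertycache(object):
    """Non-data descriptor: compute the value once, then cache it on the
    instance. Illustrative sketch of the util.propertycache pattern."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Store the computed value under the same attribute name. Because
        # this descriptor defines no __set__, the instance attribute now
        # shadows the descriptor, so the function never runs again.
        value = self.func(obj)
        obj.__dict__[self.name] = value
        return value

class Push(object):
    calls = 0

    @propertycache
    def futureheads(self):
        Push.calls += 1
        return ['head1', 'head2']

p = Push()
assert p.futureheads == ['head1', 'head2']
assert p.futureheads == ['head1', 'head2']
assert Push.calls == 1  # computed once, cached afterwards
```

This is why the code can freely reference `pushop.futureheads` from several discovery steps without recomputing the underlying revset each time.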
    # mapping of messages used when pushing bookmarks
    bookmsgmap = {'update': (_("updating bookmark %s\n"),
                             _('updating bookmark %s failed!\n')),
                  'export': (_("exporting bookmark %s\n"),
                             _('exporting bookmark %s failed!\n')),
                  'delete': (_("deleting remote bookmark %s\n"),
                             _('deleting remote bookmark %s failed!\n')),
                  }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))

    if not pushop.remote.capable('unbundle'):
        raise error.Abort(_('cannot push: destination does not support the '
                            'unbundle wire protocol command'))

    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(pushop.repo,
                                              'push-response',
                                              pushop.remote.url())
    except IOError as err:
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager(), \
            lock or util.nullcontextmanager(), \
            pushop.trmanager or util.nullcontextmanager():
        pushop.repo.checkpush(pushop)
        _pushdiscovery(pushop)
        if not _forcebundle1(pushop):
            _pushbundle2(pushop)
        _pushchangeset(pushop)
        _pushsyncphase(pushop)
        _pushobsolete(pushop)
        _pushbookmark(pushop)

    return pushop

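`push()` enters `wlock`, `lock`, and `trmanager` through `x or util.nullcontextmanager()`, so each slot may hold either a real context manager or `None`. A standalone sketch of that pattern using the standard library's `contextlib.nullcontext` (a stand-in for `util.nullcontextmanager`; the `Lock` class and names below are hypothetical):

```python
from contextlib import nullcontext

acquired = []

class Lock(object):
    """Toy lock that records its acquisition and release."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        acquired.append(self.name)
        return self
    def __exit__(self, *exc):
        acquired.append('release:' + self.name)
        return False

def run(wlock, lock):
    # Each slot may be a real lock or None; `x or nullcontext()` keeps the
    # `with` statement valid either way, mirroring the pattern in push().
    with wlock or nullcontext(), lock or nullcontext():
        acquired.append('work')

run(Lock('wlock'), None)
assert acquired == ['wlock', 'work', 'release:wlock']
```

The benefit is a single `with` statement regardless of which resources were actually acquired, instead of nested conditionals around lock cleanup.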
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure case of the changeset push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly made of
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds the changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

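The bookmark discovery above consumes the eight categories returned by `bookmod.comparebookmarks`. A simplified, self-contained sketch of how two bookmark maps can be bucketed into a few of those categories; the real comparison also uses changelog ancestry to distinguish advanced from diverged bookmarks, and all names below are illustrative:

```python
def compare(local, remote):
    """Toy categorisation of bookmark dicts (name -> node string).

    Returns (addsrc, adddst, same, differ) tuples of (name, scid, dcid),
    a reduced version of the categories used in _pushdiscoverybookmarks.
    """
    addsrc, adddst, same, differ = [], [], [], []
    for b in sorted(set(local) | set(remote)):
        scid, dcid = local.get(b), remote.get(b)
        if dcid is None:
            addsrc.append((b, scid, dcid))   # only local: to be exported
        elif scid is None:
            adddst.append((b, scid, dcid))   # only remote: delete candidate
        elif scid == dcid:
            same.append((b, scid, dcid))     # identical: nothing to report
        else:
            differ.append((b, scid, dcid))   # both moved: overwrite case
    return addsrc, adddst, same, differ

addsrc, adddst, same, differ = compare(
    {'dev': 'aaa', 'stable': 'bbb'},
    {'stable': 'bbb', 'old': 'ccc'})
assert addsrc == [('dev', 'aaa', None)]
assert adddst == [('old', None, 'ccc')]
assert same == [('stable', 'bbb', 'bbb')]
assert differ == []
```

The `(name, dcid, scid)` triples appended to `pushop.outbookmarks` above follow the same shape, with an empty string standing in for a missing side.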
def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for 80-char limit reasons
            mso = _("push includes obsolete changeset: %s!")
            mspd = _("push includes phase-divergent changeset: %s!")
            mscd = _("push includes content-divergent changeset: %s!")
            mst = {"orphan": _("push includes orphan changeset: %s!"),
                   "phase-divergent": mspd,
                   "content-divergent": mscd}
            # If we are pushing and there is at least one
            # obsolete or unstable changeset in missing, at
            # least one of the missing heads will be obsolete or
            # unstable. So checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.isunstable():
                    # TODO print more than one instability in the abort
                    # message
                    raise error.Abort(mst[ctx.instabilities()[0]] % ctx)

    discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
                                      'push')
    cgpart = bundler.newpart('changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

772 @b2partsgenerator('phase')
772 @b2partsgenerator('phase')
773 def _pushb2phases(pushop, bundler):
773 def _pushb2phases(pushop, bundler):
774 """handle phase push through bundle2"""
774 """handle phase push through bundle2"""
775 if 'phases' in pushop.stepsdone:
775 if 'phases' in pushop.stepsdone:
776 return
776 return
777 b2caps = bundle2.bundle2caps(pushop.remote)
777 b2caps = bundle2.bundle2caps(pushop.remote)
778 if not 'pushkey' in b2caps:
778 if not 'pushkey' in b2caps:
779 return
779 return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

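The `part2node` bookkeeping above can be sketched in isolation: each generated part records a `(part id, target)` pair, and a failure callback keyed by part id turns an opaque server-side pushkey failure into an error naming the exact node. Names like `make_failure_handler` below are illustrative, not part of Mercurial's API.

```python
class PushkeyFailed(Exception):
    """Stand-in for error.PushkeyFailed: carries the id of the failing part."""
    def __init__(self, partid):
        self.partid = partid

def make_failure_handler(part2node):
    # part2node: list of (partid, node) pairs recorded while building parts
    def handlefailure(exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                # report the specific node whose phase update failed
                raise RuntimeError('updating %s to public failed' % node)
    return handlefailure

part2node = [(1, 'aabbcc'), (2, 'ddeeff')]
handler = make_failure_handler(part2node)
try:
    handler(PushkeyFailed(2))
except RuntimeError as exc:
    print(exc)  # updating ddeeff to public failed
```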
@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        markers = sorted(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply

@b2partsgenerator('pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if '=' not in raw:
                msg = ("unable to parse variable '%s', should follow "
                       "'KEY=VALUE' or 'KEY=' format")
                raise error.Abort(msg % raw)
            k, v = raw.split('=', 1)
            shellvars[k] = v

        part = bundler.newpart('pushvars')

        for key, value in shellvars.iteritems():
            part.addparam(key, value, mandatory=False)

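The `KEY=VALUE` parsing above splits on the first `=` only, so values may themselves contain `=`, and `KEY=` yields an empty string. A standalone sketch of the same parsing rule (the function name is illustrative):

```python
def parse_pushvars(raw_vars):
    """Parse 'KEY=VALUE' or 'KEY=' strings into a dict, splitting on the
    first '=' only, mirroring the check in _getbundlesendvars."""
    shellvars = {}
    for raw in raw_vars:
        if '=' not in raw:
            raise ValueError("unable to parse variable %r, should follow "
                             "'KEY=VALUE' or 'KEY=' format" % raw)
        k, v = raw.split('=', 1)
        shellvars[k] = v
    return shellvars

print(parse_pushvars(['DEBUG=1', 'EMPTY=', 'PATH=a=b']))
# {'DEBUG': '1', 'EMPTY': '', 'PATH': 'a=b'}
```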
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

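The loop over `b2partsgenorder` implements a small registration protocol: generators run in a fixed order, each may add parts to the bundler, and any callable a generator returns is collected and later invoked with the server's reply. A hypothetical standalone sketch of that protocol (all names below are illustrative):

```python
genorder = []    # fixed execution order of part generators
genmapping = {}  # step name -> generator function

def partsgenerator(name):
    # register a generator, preserving definition order
    def dec(func):
        genmapping[name] = func
        genorder.append(name)
        return func
    return dec

@partsgenerator('phases')
def gen_phases(bundler):
    bundler.append('phases-part')
    def handlereply(reply):
        return 'phases saw %s' % reply
    return handlereply  # callable: collected as a reply handler

@partsgenerator('bookmarks')
def gen_bookmarks(bundler):
    bundler.append('bookmarks-part')  # returns None: no reply handler

bundler = []
replyhandlers = []
for name in genorder:
    ret = genmapping[name](bundler)
    if callable(ret):
        replyhandlers.append(ret)

print(bundler)                           # ['phases-part', 'bookmarks-part']
print([h('ok') for h in replyhandlers])  # ['phases saw ok']
```

Returning a closure rather than registering a global callback keeps each generator's reply handling next to the code that built its parts.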
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable('unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
                                         fastpath=True, bundlecaps=bundlecaps)
    else:
        cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
                                         'push', bundlecaps=bundlecaps)

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = ['force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                             pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo')
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new pulloperation should be created at the beginning of each pull and
    discarded afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

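The class above creates its transaction lazily on first use, so `close()` and `release()` must be safe no-ops when nothing was ever opened. A minimal sketch of that idea, with hypothetical names and a fake transaction standing in for the repo's:

```python
class LazyTxnManager(object):
    """Create a transaction on first request; tolerate close() before open."""
    def __init__(self, open_txn):
        self._open = open_txn  # factory producing a new transaction
        self._tr = None

    def transaction(self):
        if self._tr is None:
            self._tr = self._open()
        return self._tr  # repeated calls return the same object

    def close(self):
        if self._tr is not None:  # no-op when never opened
            self._tr.close()

class FakeTxn(object):
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

mgr = LazyTxnManager(FakeTxn)
mgr.close()                      # nothing opened yet: does nothing
tr = mgr.transaction()
assert mgr.transaction() is tr   # cached, not recreated
mgr.close()
print(tr.closed)                 # True
```

Deferring creation this way means read-only pulls that fetch nothing never pay for opening a transaction.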
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

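The registration pattern above can be sketched in isolation. This is a minimal, self-contained sketch of the same order-preserving decorator registry; the names (`stepsorder`, `stepsmapping`, `registerstep`, `runsteps`) are hypothetical stand-ins, not Mercurial internals:

```python
# Minimal sketch of the step-registration pattern used by pulldiscovery().
# All names here are illustrative stand-ins, not Mercurial's actual API.
stepsorder = []    # step names, in registration order
stepsmapping = {}  # step name -> function

def registerstep(stepname):
    """Register a function under `stepname`; decoration order matters."""
    def dec(func):
        assert stepname not in stepsmapping, 'step registered twice'
        stepsmapping[stepname] = func
        stepsorder.append(stepname)
        return func
    return dec

@registerstep('first')
def _first(state):
    state.append('first')

@registerstep('second')
def _second(state):
    state.append('second')

def runsteps(state):
    # Mirrors _pulldiscovery(): run every registered step, in order.
    for name in stepsorder:
        stepsmapping[name](state)

state = []
runsteps(state)
# state is now ['first', 'second']
```

The `assert` mirrors the real decorator's guard against double registration; extensions that want to *wrap* an existing step mutate the mapping directly instead of re-registering.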
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will eventually handle
    all discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, drop it from the unknown
        # remote heads and put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge number of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we will not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # If the bundle had a phase-heads part, then phase exchange is already done
    if op.records['phase-heads']:
        pullop.stepsdone.add('phases')

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't
    # open a transaction for nothing, which would break future useful
    # rollback calls
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
                                   pullop.remote.url())
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

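The `bundle2=` capability built above is a percent-encoded capabilities blob; the receiving side decodes it the same way `getbundlechunks()` does. A self-contained sketch of that round trip using only the stdlib (the blob contents here are made up for illustration; the real blob comes from `bundle2.encodecaps(bundle2.getrepocaps(repo))`):

```python
from urllib.parse import quote, unquote

# Hypothetical capabilities blob, for illustration only.
capsblob = 'HG20\nchangegroup=01,02\nlistkeys'

# What caps20to10() builds: the HG20 marker plus the encoded blob.
caps = {'HG20', 'bundle2=' + quote(capsblob)}

# What the getbundle side does: find the bundle2= entry and decode it.
decoded = None
for cap in caps:
    if cap.startswith('bundle2='):
        decoded = unquote(cap[len('bundle2='):])
assert decoded == capsblob  # round trip is lossless
```

Percent-encoding keeps the newline-separated blob safe to transport inside a flat, space-separated capability list.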
# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

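Unlike `pulldiscovery()`, the decorator above takes an optional `idx`, letting a part generator claim a specific position in the order rather than just appending. A standalone sketch of that behavior (`partsorder`, `partsmapping`, and the step names are illustrative stand-ins):

```python
# Minimal sketch of ordered registration with optional index insertion.
# Names are hypothetical stand-ins, not Mercurial internals.
partsorder = []
partsmapping = {}

def registerpart(stepname, idx=None):
    def dec(func):
        assert stepname not in partsmapping
        partsmapping[stepname] = func
        if idx is None:
            partsorder.append(stepname)       # default: append at the end
        else:
            partsorder.insert(idx, stepname)  # claim a specific position
        return func
    return dec

@registerpart('changegroup')
def _cg():
    pass

@registerpart('listkeys')
def _lk():
    pass

@registerpart('check:heads', idx=0)  # e.g. a check that must run first
def _ch():
    pass

# partsorder is now ['check:heads', 'changegroup', 'listkeys']
```

This is how an extension can guarantee its part is generated before (or after) the built-in ones without monkey-patching the order list.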
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cgstream = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cgstream = changegroup.makestream(repo, outgoing, version, source,
                                          bundlecaps=bundlecaps)

    if cgstream:
        part = bundler.newpart('changegroup', data=cgstream)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

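The version negotiation above intersects the client-advertised changegroup versions with what the server can emit, then picks the highest. Sketched standalone (the version sets are illustrative; the real server set comes from `changegroup.supportedoutgoingversions(repo)`):

```python
# Hypothetical capability sets, for illustration only.
serversupported = {'01', '02', '03'}  # what this server can emit
clientversions = ['01', '02']         # from the client's bundle2 caps

# Keep only versions both sides understand.
common = [v for v in clientversions if v in serversupported]
if not common:
    raise ValueError('no common changegroup version')

# Pick the newest mutually supported format ('02' here); string
# comparison works because the version names sort lexicographically.
version = max(common)
```

An empty client list (as shipped by Mercurial 3.1/3.2) skips negotiation entirely and falls back to `'01'`, which is why the real code guards with `if cgversions:` first.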
@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

1689 def _getbookmarks(repo, **kwargs):
1689 def _getbookmarks(repo, **kwargs):
1690 """Returns bookmark to node mapping.
1690 """Returns bookmark to node mapping.
1691
1691
1692 This function is primarily used to generate `bookmarks` bundle2 part.
1692 This function is primarily used to generate `bookmarks` bundle2 part.
1693 It is a separate function in order to make it easy to wrap it
1693 It is a separate function in order to make it easy to wrap it
1694 in extensions. Passing `kwargs` to the function makes it easy to
1694 in extensions. Passing `kwargs` to the function makes it easy to
1695 add new parameters in extensions.
1695 add new parameters in extensions.
1696 """
1696 """
1697
1697
1698 return dict(bookmod.listbinbookmarks(repo))
1698 return dict(bookmod.listbinbookmarks(repo))
1699
1699
1700 def check_heads(repo, their_heads, context):
1700 def check_heads(repo, their_heads, context):
1701 """check if the heads of a repo have been modified
1701 """check if the heads of a repo have been modified
1702
1702
1703 Used by peer for unbundling.
1703 Used by peer for unbundling.
1704 """
1704 """
1705 heads = repo.heads()
1705 heads = repo.heads()
1706 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1706 heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
1707 if not (their_heads == ['force'] or their_heads == heads or
1707 if not (their_heads == ['force'] or their_heads == heads or
1708 their_heads == ['hashed', heads_hash]):
1708 their_heads == ['hashed', heads_hash]):
1709 # someone else committed/pushed/unbundled while we
1709 # someone else committed/pushed/unbundled while we
1710 # were transferring data
1710 # were transferring data
1711 raise error.PushRaced('repository changed while %s - '
1711 raise error.PushRaced('repository changed while %s - '
1712 'please try again' % context)
1712 'please try again' % context)
1713
1713
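As a minimal sketch of the race check in `check_heads` above: the value a client sends as `['hashed', heads_hash]` is just a SHA-1 over the sorted concatenation of the binary head node ids. The two 20-byte node ids here are made up for illustration:

```python
import hashlib

# Hypothetical 20-byte binary node ids standing in for repo.heads().
heads = [b'\x01' * 20, b'\x02' * 20]

# Same recipe as check_heads(): sort the heads, concatenate, SHA-1.
heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()

# A client that last saw exactly these heads sends ['hashed', heads_hash];
# any commit/push/unbundle in between changes the digest, so the server's
# recomputed hash no longer matches and PushRaced is raised.
assert len(heads_hash) == 20
```

Because the heads are sorted before hashing, the digest is independent of the order in which the client enumerated them.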
1714 def unbundle(repo, cg, heads, source, url):
1715 """Apply a bundle to a repo.
1716
1717 This function makes sure the repo is locked during the application and has
1718 a mechanism to check that no push race occurred between the creation of the
1719 bundle and its application.
1720
1721 If the push was raced, a PushRaced exception is raised."""
1722 r = 0
1723 # need a transaction when processing a bundle2 stream
1724 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1725 lockandtr = [None, None, None]
1726 recordout = None
1727 # quick fix for output mismatch with bundle2 in 3.4
1728 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
1729 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1730 captureoutput = True
1731 try:
1732 # note: outside bundle1, 'heads' is expected to be empty and this
1733 # 'check_heads' call will be a no-op
1734 check_heads(repo, heads, 'uploading changes')
1735 # push can proceed
1736 if not isinstance(cg, bundle2.unbundle20):
1737 # legacy case: bundle1 (changegroup 01)
1738 txnname = "\n".join([source, util.hidepassword(url)])
1739 with repo.lock(), repo.transaction(txnname) as tr:
1740 op = bundle2.applybundle(repo, cg, tr, source, url)
1741 r = bundle2.combinechangegroupresults(op)
1742 else:
1743 r = None
1744 try:
1745 def gettransaction():
1746 if not lockandtr[2]:
1747 lockandtr[0] = repo.wlock()
1748 lockandtr[1] = repo.lock()
1749 lockandtr[2] = repo.transaction(source)
1750 lockandtr[2].hookargs['source'] = source
1751 lockandtr[2].hookargs['url'] = url
1752 lockandtr[2].hookargs['bundle2'] = '1'
1753 return lockandtr[2]
1754
1755 # Do greedy locking by default until we're satisfied with lazy
1756 # locking.
1757 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1758 gettransaction()
1759
1760 op = bundle2.bundleoperation(repo, gettransaction,
1761 captureoutput=captureoutput)
1762 try:
1763 op = bundle2.processbundle(repo, cg, op=op)
1764 finally:
1765 r = op.reply
1766 if captureoutput and r is not None:
1767 repo.ui.pushbuffer(error=True, subproc=True)
1768 def recordout(output):
1769 r.newpart('output', data=output, mandatory=False)
1770 if lockandtr[2] is not None:
1771 lockandtr[2].close()
1772 except BaseException as exc:
1773 exc.duringunbundle2 = True
1774 if captureoutput and r is not None:
1775 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1776 def recordout(output):
1777 part = bundle2.bundlepart('output', data=output,
1778 mandatory=False)
1779 parts.append(part)
1780 raise
1781 finally:
1782 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1783 if recordout is not None:
1784 recordout(repo.ui.popbuffer())
1785 return r
1786
1787 def _maybeapplyclonebundle(pullop):
1788 """Apply a clone bundle from a remote, if possible."""
1789
1790 repo = pullop.repo
1791 remote = pullop.remote
1792
1793 if not repo.ui.configbool('ui', 'clonebundles'):
1794 return
1795
1796 # Only run if local repo is empty.
1797 if len(repo):
1798 return
1799
1800 if pullop.heads:
1801 return
1802
1803 if not remote.capable('clonebundles'):
1804 return
1805
1806 res = remote._call('clonebundles')
1807
1808 # If we call the wire protocol command, that's good enough to record the
1809 # attempt.
1810 pullop.clonebundleattempted = True
1811
1812 entries = parseclonebundlesmanifest(repo, res)
1813 if not entries:
1814 repo.ui.note(_('no clone bundles available on remote; '
1815 'falling back to regular clone\n'))
1816 return
1817
1818 entries = filterclonebundleentries(repo, entries)
1819 if not entries:
1820 # There is a thundering herd concern here. However, if a server
1821 # operator doesn't advertise bundles appropriate for its clients,
1822 # they deserve what's coming. Furthermore, from a client's
1823 # perspective, no automatic fallback would mean not being able to
1824 # clone!
1825 repo.ui.warn(_('no compatible clone bundles available on server; '
1826 'falling back to regular clone\n'))
1827 repo.ui.warn(_('(you may want to report this to the server '
1828 'operator)\n'))
1829 return
1830
1831 entries = sortclonebundleentries(repo.ui, entries)
1832
1833 url = entries[0]['URL']
1834 repo.ui.status(_('applying clone bundle from %s\n') % url)
1835 if trypullbundlefromurl(repo.ui, repo, url):
1836 repo.ui.status(_('finished applying clone bundle\n'))
1837 # Bundle failed.
1838 #
1839 # We abort by default to avoid the thundering herd of
1840 # clients flooding a server that was expecting expensive
1841 # clone load to be offloaded.
1842 elif repo.ui.configbool('ui', 'clonebundlefallback'):
1843 repo.ui.warn(_('falling back to normal clone\n'))
1844 else:
1845 raise error.Abort(_('error applying bundle'),
1846 hint=_('if this error persists, consider contacting '
1847 'the server operator or disable clone '
1848 'bundles via '
1849 '"--config ui.clonebundles=false"'))
1850
1851 def parseclonebundlesmanifest(repo, s):
1852 """Parses the raw text of a clone bundles manifest.
1853
1854 Returns a list of dicts. The dicts have a ``URL`` key corresponding
1855 to the URL and other keys are the attributes for the entry.
1856 """
1857 m = []
1858 for line in s.splitlines():
1859 fields = line.split()
1860 if not fields:
1861 continue
1862 attrs = {'URL': fields[0]}
1863 for rawattr in fields[1:]:
1864 key, value = rawattr.split('=', 1)
1865 key = urlreq.unquote(key)
1866 value = urlreq.unquote(value)
1867 attrs[key] = value
1868
1869 # Parse BUNDLESPEC into components. This makes client-side
1870 # preferences easier to specify since you can prefer a single
1871 # component of the BUNDLESPEC.
1872 if key == 'BUNDLESPEC':
1873 try:
1874 comp, version, params = parsebundlespec(repo, value,
1875 externalnames=True)
1876 attrs['COMPRESSION'] = comp
1877 attrs['VERSION'] = version
1878 except error.InvalidBundleSpecification:
1879 pass
1880 except error.UnsupportedBundleSpecification:
1881 pass
1882
1883 m.append(attrs)
1884
1885 return m
1886
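A clone bundles manifest, as consumed by `parseclonebundlesmanifest` above, is plain text: one entry per line, the URL first, followed by space-separated percent-encoded `key=value` attributes. A stdlib-only sketch of that parsing (the sample URL and attribute values are made up; the real function additionally splits BUNDLESPEC via `parsebundlespec`):

```python
from urllib.parse import unquote

def parse_manifest(s):
    # One entry per line: URL, then optional percent-encoded attributes.
    entries = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue  # skip blank lines, as the real parser does
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries

manifest = ("https://example.com/full.hg "
            "BUNDLESPEC=gzip-v2 REQUIRESNI=true\n")
entries = parse_manifest(manifest)
assert entries[0]['URL'] == 'https://example.com/full.hg'
assert entries[0]['BUNDLESPEC'] == 'gzip-v2'
```

Keys and values are unquoted independently, so attribute values may themselves contain spaces or `=` once decoded.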
1887 def filterclonebundleentries(repo, entries):
1888 """Remove incompatible clone bundle manifest entries.
1889
1890 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1891 and returns a new list consisting of only the entries that this client
1892 should be able to apply.
1893
1894 There is no guarantee we'll be able to apply all returned entries because
1895 the metadata we use to filter on may be missing or wrong.
1896 """
1897 newentries = []
1898 for entry in entries:
1899 spec = entry.get('BUNDLESPEC')
1900 if spec:
1901 try:
1902 parsebundlespec(repo, spec, strict=True)
1903 except error.InvalidBundleSpecification as e:
1904 repo.ui.debug(str(e) + '\n')
1905 continue
1906 except error.UnsupportedBundleSpecification as e:
1907 repo.ui.debug('filtering %s because unsupported bundle '
1908 'spec: %s\n' % (entry['URL'], str(e)))
1909 continue
1910
1911 if 'REQUIRESNI' in entry and not sslutil.hassni:
1912 repo.ui.debug('filtering %s because SNI not supported\n' %
1913 entry['URL'])
1914 continue
1915
1916 newentries.append(entry)
1917
1918 return newentries
1919
1920 class clonebundleentry(object):
1921 """Represents an item in a clone bundles manifest.
1922
1923 This rich class is needed to support sorting since sorted() in Python 3
1924 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
1925 won't work.
1926 """
1927
1928 def __init__(self, value, prefers):
1929 self.value = value
1930 self.prefers = prefers
1931
1932 def _cmp(self, other):
1933 for prefkey, prefvalue in self.prefers:
1934 avalue = self.value.get(prefkey)
1935 bvalue = other.value.get(prefkey)
1936
1937 # Special case for b missing attribute and a matches exactly.
1938 if avalue is not None and bvalue is None and avalue == prefvalue:
1939 return -1
1940
1941 # Special case for a missing attribute and b matches exactly.
1942 if bvalue is not None and avalue is None and bvalue == prefvalue:
1943 return 1
1944
1945 # We can't compare unless attribute present on both.
1946 if avalue is None or bvalue is None:
1947 continue
1948
1949 # Same values should fall back to next attribute.
1950 if avalue == bvalue:
1951 continue
1952
1953 # Exact matches come first.
1954 if avalue == prefvalue:
1955 return -1
1956 if bvalue == prefvalue:
1957 return 1
1958
1959 # Fall back to next attribute.
1960 continue
1961
1962 # If we got here we couldn't sort by attributes and prefers. Fall
1963 # back to index order.
1964 return 0
1965
1966 def __lt__(self, other):
1967 return self._cmp(other) < 0
1968
1969 def __gt__(self, other):
1970 return self._cmp(other) > 0
1971
1972 def __eq__(self, other):
1973 return self._cmp(other) == 0
1974
1975 def __le__(self, other):
1976 return self._cmp(other) <= 0
1977
1978 def __ge__(self, other):
1979 return self._cmp(other) >= 0
1980
1981 def __ne__(self, other):
1982 return self._cmp(other) != 0
1983
1984 def sortclonebundleentries(ui, entries):
1985 prefers = ui.configlist('ui', 'clonebundleprefers')
1986 if not prefers:
1987 return list(entries)
1988
1989 prefers = [p.split('=', 1) for p in prefers]
1990
1991 items = sorted(clonebundleentry(v, prefers) for v in entries)
1992 return [i.value for i in items]
1993
1994 def trypullbundlefromurl(ui, repo, url):
1995 """Attempt to apply a bundle from a URL."""
1996 with repo.lock(), repo.transaction('bundleurl') as tr:
1997 try:
1998 fh = urlmod.open(ui, url)
1999 cg = readbundle(ui, fh, 'stream')
2000
2001 if isinstance(cg, streamclone.streamcloneapplier):
2002 cg.apply(repo)
2003 else:
2004 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2005 return True
2006 except urlerr.httperror as e:
2007 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2008 except urlerr.urlerror as e:
2009 ui.warn(_('error fetching bundle: %s\n') % e.reason)
2010
2011 return False
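`trypullbundlefromurl` splits transport failures into two warning paths (HTTP-level errors vs. lower-level URL errors) and returns `False` so the caller can fall back to a regular clone. The fetch-and-warn half can be sketched with plain `urllib` (the bundle path is hypothetical; no Mercurial APIs involved):

```python
import urllib.request
import urllib.error

def try_fetch(url):
    # Mirror the error split in trypullbundlefromurl: an HTTPError is
    # reported as-is, any other URLError by its .reason; both outcomes
    # mean "give up on the clone bundle and fall back".
    try:
        with urllib.request.urlopen(url) as fh:
            return fh.read()
    except urllib.error.HTTPError as e:
        print('HTTP error fetching bundle: %s' % e)
    except urllib.error.URLError as e:
        print('error fetching bundle: %s' % e.reason)
    return None

# A nonexistent file:// target triggers the URLError branch without
# needing any network access.
assert try_fetch('file:///nonexistent-bundle.hg') is None
```

Note that `HTTPError` is a subclass of `URLError`, so the more specific handler must come first, just as `urlerr.httperror` precedes `urlerr.urlerror` above.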
@@ -1,1233 +1,1234 b''
1 This test is dedicated to test the bundle2 container format
2
3 It tests multiple existing parts to exercise different features of the container. You
4 probably do not need to touch this test unless you change the binary encoding
5 of the bundle2 format itself.
6
7 Create an extension to test bundle2 API
8
9 $ cat > bundle2.py << EOF
10 > """A small extension to test bundle2 implementation
11 >
12 > This extension allows detailed testing of the various bundle2 API and
13 > behaviors.
14 > """
15 > import gc
16 > import os
17 > import sys
18 > from mercurial import util
19 > from mercurial import bundle2
20 > from mercurial import scmutil
21 > from mercurial import discovery
22 > from mercurial import changegroup
23 > from mercurial import error
24 > from mercurial import obsolete
25 > from mercurial import registrar
26 >
27 >
28 > try:
29 > import msvcrt
30 > msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
31 > msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
32 > msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
33 > except ImportError:
34 > pass
35 >
36 > cmdtable = {}
37 > command = registrar.command(cmdtable)
38 >
39 > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
40 > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
41 > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
42 > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
43 >
44 > @bundle2.parthandler('test:song')
45 > def songhandler(op, part):
46 > """handle a "test:song" bundle2 part, printing the lyrics on stdin"""
47 > op.ui.write('The choir starts singing:\n')
48 > verses = 0
49 > for line in part.read().split('\n'):
50 > op.ui.write(' %s\n' % line)
51 > verses += 1
52 > op.records.add('song', {'verses': verses})
53 >
54 > @bundle2.parthandler('test:ping')
55 > def pinghandler(op, part):
56 > op.ui.write('received ping request (id %i)\n' % part.id)
57 > if op.reply is not None and 'ping-pong' in op.reply.capabilities:
58 > op.ui.write_err('replying to ping request (id %i)\n' % part.id)
59 > op.reply.newpart('test:pong', [('in-reply-to', str(part.id))],
60 > mandatory=False)
61 >
62 > @bundle2.parthandler('test:debugreply')
63 > def debugreply(op, part):
64 > """print data about the capacity of the bundle reply"""
65 > if op.reply is None:
66 > op.ui.write('debugreply: no reply\n')
67 > else:
68 > op.ui.write('debugreply: capabilities:\n')
69 > for cap in sorted(op.reply.capabilities):
70 > op.ui.write('debugreply: %r\n' % cap)
71 > for val in op.reply.capabilities[cap]:
72 > op.ui.write('debugreply: %r\n' % val)
73 >
74 > @command(b'bundle2',
75 > [('', 'param', [], 'stream level parameter'),
76 > ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
77 > ('', 'unknownparams', False, 'include an unknown part parameters in the bundle'),
78 > ('', 'parts', False, 'include some arbitrary parts to the bundle'),
79 > ('', 'reply', False, 'produce a reply bundle'),
80 > ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
81 > ('', 'genraise', False, 'includes a part that raises an exception during generation'),
82 > ('', 'timeout', False, 'emulate a timeout during bundle generation'),
83 > ('r', 'rev', [], 'include those changesets in the bundle'),
84 > ('', 'compress', '', 'compress the stream'),],
85 > '[OUTPUTFILE]')
86 > def cmdbundle2(ui, repo, path=None, **opts):
87 > """write a bundle2 container on standard output"""
88 > bundler = bundle2.bundle20(ui)
89 > for p in opts['param']:
90 > p = p.split('=', 1)
91 > try:
92 > bundler.addparam(*p)
93 > except ValueError as exc:
94 > raise error.Abort('%s' % exc)
95 >
96 > if opts['compress']:
97 > bundler.setcompression(opts['compress'])
98 >
99 > if opts['reply']:
100 > capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
101 > bundler.newpart('replycaps', data=capsstring)
102 >
103 > if opts['pushrace']:
104 > # also serves to test the assignment of data outside of init
105 > part = bundler.newpart('check:heads')
106 > part.data = '01234567890123456789'
107 >
108 > revs = opts['rev']
109 > if 'rev' in opts:
110 > revs = scmutil.revrange(repo, opts['rev'])
111 > if revs:
112 > # very crude version of a changegroup part creation
113 > bundled = repo.revs('%ld::%ld', revs, revs)
114 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
115 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
116 > outgoing = discovery.outgoing(repo, headcommon, headmissing)
117 > cg = changegroup.getchangegroup(repo, 'test:bundle2', outgoing, None)
117 > cg = changegroup.makechangegroup(repo, outgoing, '01',
118 > 'test:bundle2')
119 > bundler.newpart('changegroup', data=cg.getchunks(),
120 > mandatory=False)
121 >
122 > if opts['parts']:
123 > bundler.newpart('test:empty', mandatory=False)
124 > # add a second one to make sure we handle multiple parts
125 > bundler.newpart('test:empty', mandatory=False)
126 > bundler.newpart('test:song', data=ELEPHANTSSONG, mandatory=False)
127 > bundler.newpart('test:debugreply', mandatory=False)
128 > mathpart = bundler.newpart('test:math')
129 > mathpart.addparam('pi', '3.14')
130 > mathpart.addparam('e', '2.72')
131 > mathpart.addparam('cooking', 'raw', mandatory=False)
132 > mathpart.data = '42'
133 > mathpart.mandatory = False
134 > # advisory known part with unknown mandatory param
135 > bundler.newpart('test:song', [('randomparam','')], mandatory=False)
136 > if opts['unknown']:
137 > bundler.newpart('test:unknown', data='some random content')
138 > if opts['unknownparams']:
139 > bundler.newpart('test:song', [('randomparams', '')])
140 > if opts['parts']:
141 > bundler.newpart('test:ping', mandatory=False)
142 > if opts['genraise']:
143 > def genraise():
144 > yield 'first line\n'
145 > raise RuntimeError('Someone set up us the bomb!')
146 > bundler.newpart('output', data=genraise(), mandatory=False)
147 >
148 > if path is None:
149 > file = sys.stdout
150 > else:
151 > file = open(path, 'wb')
152 >
153 > if opts['timeout']:
154 > bundler.newpart('test:song', data=ELEPHANTSSONG, mandatory=False)
155 > for idx, junk in enumerate(bundler.getchunks()):
156 > ui.write('%d chunk\n' % idx)
157 > if idx > 4:
158 > # This throws a GeneratorExit inside the generator, which
159 > # can cause problems if the exception-recovery code is
159 > # can cause problems if the exception-recovery code is
159 > # too zealous. It's important for this test that the break
160 > # too zealous. It's important for this test that the break
160 > # occur while we're in the middle of a part.
161 > # occur while we're in the middle of a part.
161 > break
162 > break
162 > gc.collect()
163 > gc.collect()
163 > ui.write('fake timeout complete.\n')
164 > ui.write('fake timeout complete.\n')
164 > return
165 > return
165 > try:
166 > try:
166 > for chunk in bundler.getchunks():
167 > for chunk in bundler.getchunks():
167 > file.write(chunk)
168 > file.write(chunk)
168 > except RuntimeError as exc:
169 > except RuntimeError as exc:
169 > raise error.Abort(exc)
170 > raise error.Abort(exc)
170 > finally:
171 > finally:
171 > file.flush()
172 > file.flush()
172 >
173 >
173 > @command(b'unbundle2', [], '')
174 > @command(b'unbundle2', [], '')
174 > def cmdunbundle2(ui, repo, replypath=None):
175 > def cmdunbundle2(ui, repo, replypath=None):
175 > """process a bundle2 stream from stdin on the current repo"""
176 > """process a bundle2 stream from stdin on the current repo"""
176 > try:
177 > try:
177 > tr = None
178 > tr = None
178 > lock = repo.lock()
179 > lock = repo.lock()
179 > tr = repo.transaction('processbundle')
180 > tr = repo.transaction('processbundle')
180 > try:
181 > try:
181 > unbundler = bundle2.getunbundler(ui, sys.stdin)
182 > unbundler = bundle2.getunbundler(ui, sys.stdin)
182 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
183 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
183 > tr.close()
184 > tr.close()
184 > except error.BundleValueError as exc:
185 > except error.BundleValueError as exc:
185 > raise error.Abort('missing support for %s' % exc)
186 > raise error.Abort('missing support for %s' % exc)
186 > except error.PushRaced as exc:
187 > except error.PushRaced as exc:
187 > raise error.Abort('push race: %s' % exc)
188 > raise error.Abort('push race: %s' % exc)
188 > finally:
189 > finally:
189 > if tr is not None:
190 > if tr is not None:
190 > tr.release()
191 > tr.release()
191 > lock.release()
192 > lock.release()
192 > remains = sys.stdin.read()
193 > remains = sys.stdin.read()
193 > ui.write('%i unread bytes\n' % len(remains))
194 > ui.write('%i unread bytes\n' % len(remains))
194 > if op.records['song']:
195 > if op.records['song']:
195 > totalverses = sum(r['verses'] for r in op.records['song'])
196 > totalverses = sum(r['verses'] for r in op.records['song'])
196 > ui.write('%i total verses sung\n' % totalverses)
197 > ui.write('%i total verses sung\n' % totalverses)
197 > for rec in op.records['changegroup']:
198 > for rec in op.records['changegroup']:
198 > ui.write('addchangegroup return: %i\n' % rec['return'])
199 > ui.write('addchangegroup return: %i\n' % rec['return'])
199 > if op.reply is not None and replypath is not None:
200 > if op.reply is not None and replypath is not None:
200 > with open(replypath, 'wb') as file:
201 > with open(replypath, 'wb') as file:
201 > for chunk in op.reply.getchunks():
202 > for chunk in op.reply.getchunks():
202 > file.write(chunk)
203 > file.write(chunk)
203 >
204 >
204 > @command(b'statbundle2', [], '')
205 > @command(b'statbundle2', [], '')
205 > def cmdstatbundle2(ui, repo):
206 > def cmdstatbundle2(ui, repo):
206 > """print statistic on the bundle2 container read from stdin"""
207 > """print statistic on the bundle2 container read from stdin"""
207 > unbundler = bundle2.getunbundler(ui, sys.stdin)
208 > unbundler = bundle2.getunbundler(ui, sys.stdin)
208 > try:
209 > try:
209 > params = unbundler.params
210 > params = unbundler.params
210 > except error.BundleValueError as exc:
211 > except error.BundleValueError as exc:
211 > raise error.Abort('unknown parameters: %s' % exc)
212 > raise error.Abort('unknown parameters: %s' % exc)
212 > ui.write('options count: %i\n' % len(params))
213 > ui.write('options count: %i\n' % len(params))
213 > for key in sorted(params):
214 > for key in sorted(params):
214 > ui.write('- %s\n' % key)
215 > ui.write('- %s\n' % key)
215 > value = params[key]
216 > value = params[key]
216 > if value is not None:
217 > if value is not None:
217 > ui.write(' %s\n' % value)
218 > ui.write(' %s\n' % value)
218 > count = 0
219 > count = 0
219 > for p in unbundler.iterparts():
220 > for p in unbundler.iterparts():
220 > count += 1
221 > count += 1
221 > ui.write(' :%s:\n' % p.type)
222 > ui.write(' :%s:\n' % p.type)
222 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
223 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
223 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
224 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
224 > ui.write(' payload: %i bytes\n' % len(p.read()))
225 > ui.write(' payload: %i bytes\n' % len(p.read()))
225 > ui.write('parts count: %i\n' % count)
226 > ui.write('parts count: %i\n' % count)
226 > EOF
227 > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [experimental]
  > stabilization=createmarkers
  > [ui]
  > ssh=$PYTHON "$TESTDIR/dummyssh"
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > [web]
  > push_ssl = false
  > allow_push = *
  > [phases]
  > publish=False
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2 | f --hexdump

  0000: 48 47 32 30 00 00 00 00 00 00 00 00             |HG20........|

Test timeouts during bundling
  $ hg bundle2 --timeout --debug --config devel.bundle2.debug=yes
  bundle2-output-bundle: "HG20", 1 parts total
  bundle2-output: start emission of HG20 stream
  0 chunk
  bundle2-output: bundle parameter:
  1 chunk
  bundle2-output: start of parts
  bundle2-output: bundle part: "test:song"
  bundle2-output-part: "test:song" (advisory) 178 bytes payload
  bundle2-output: part 0: "test:song"
  bundle2-output: header chunk size: 16
  2 chunk
  3 chunk
  bundle2-output: payload chunk size: 178
  4 chunk
  5 chunk
  bundle2-generatorexit
  fake timeout complete.

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0

Test old style bundles are detected and refused

  $ hg bundle --all --type v1 ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameters form

Test generation simple option

  $ hg bundle2 --param 'caution' | f --hexdump

  0000: 48 47 32 30 00 00 00 07 63 61 75 74 69 6f 6e 00 |HG20....caution.|
  0010: 00 00 00                                         |...|

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation multiple option

  $ hg bundle2 --param 'caution' --param 'meal' | f --hexdump

  0000: 48 47 32 30 00 00 00 0c 63 61 75 74 69 6f 6e 20 |HG20....caution |
  0010: 6d 65 61 6c 00 00 00 00                          |meal....|

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | f --hexdump

  0000: 48 47 32 30 00 00 00 1c 63 61 75 74 69 6f 6e 20 |HG20....caution |
  0010: 6d 65 61 6c 3d 76 65 67 61 6e 20 65 6c 65 70 68 |meal=vegan eleph|
  0020: 61 6e 74 73 00 00 00 00                          |ants....|

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
      vegan
  parts count: 0

parameter with special char in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | f --hexdump

  0000: 48 47 32 30 00 00 00 29 65 25 37 43 25 32 31 25 |HG20...)e%7C%21%|
  0010: 32 30 37 2f 3d 62 61 62 61 72 25 32 35 25 32 33 |207/=babar%25%23|
  0020: 25 33 44 25 33 44 74 75 74 75 20 73 69 6d 70 6c |%3D%3Dtutu simpl|
  0030: 65 00 00 00 00                                   |e....|

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  parts count: 0

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: Stream Parameter - Gravity
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2 --config progress.debug=true --config devel.bundle2.debug=true
  bundle2-output-bundle: "HG20", (2 params) 0 parts total
  bundle2-output: start emission of HG20 stream
  bundle2-output: bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  bundle2-output: start of parts
  bundle2-output: end of bundle

file content is ok

  $ f --hexdump ../out.hg2
  ../out.hg2:
  0000: 48 47 32 30 00 00 00 29 65 25 37 43 25 32 31 25 |HG20...)e%7C%21%|
  0010: 32 30 37 2f 3d 62 61 62 61 72 25 32 35 25 32 33 |207/=babar%25%23|
  0020: 25 33 44 25 33 44 74 75 74 75 20 73 69 6d 70 6c |%3D%3Dtutu simpl|
  0030: 65 00 00 00 00                                   |e....|

unbundling debug

  $ hg statbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../out.hg2
  bundle2-input: start processing of HG20 stream
  bundle2-input: reading bundle2 stream parameters
  bundle2-input: ignoring unknown parameter 'e|! 7/'
  bundle2-input: ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  bundle2-input: start extraction of bundle2 parts
  bundle2-input: part header size: 0
  bundle2-input: end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]


Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug --config progress.debug=true --config devel.bundle2.debug=true
  bundle2-output-bundle: "HG20", 7 parts total
  bundle2-output: start emission of HG20 stream
  bundle2-output: bundle parameter:
  bundle2-output: start of parts
  bundle2-output: bundle part: "test:empty"
  bundle2-output-part: "test:empty" (advisory) empty payload
  bundle2-output: part 0: "test:empty"
  bundle2-output: header chunk size: 17
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:empty"
  bundle2-output-part: "test:empty" (advisory) empty payload
  bundle2-output: part 1: "test:empty"
  bundle2-output: header chunk size: 17
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:song"
  bundle2-output-part: "test:song" (advisory) 178 bytes payload
  bundle2-output: part 2: "test:song"
  bundle2-output: header chunk size: 16
  bundle2-output: payload chunk size: 178
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:debugreply"
  bundle2-output-part: "test:debugreply" (advisory) empty payload
  bundle2-output: part 3: "test:debugreply"
  bundle2-output: header chunk size: 22
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:math"
  bundle2-output-part: "test:math" (advisory) (params: 2 mandatory 2 advisory) 2 bytes payload
  bundle2-output: part 4: "test:math"
  bundle2-output: header chunk size: 43
  bundle2-output: payload chunk size: 2
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:song"
  bundle2-output-part: "test:song" (advisory) (params: 1 mandatory) empty payload
  bundle2-output: part 5: "test:song"
  bundle2-output: header chunk size: 29
  bundle2-output: closing payload chunk
  bundle2-output: bundle part: "test:ping"
  bundle2-output-part: "test:ping" (advisory) empty payload
  bundle2-output: part 6: "test:ping"
  bundle2-output: header chunk size: 16
  bundle2-output: closing payload chunk
  bundle2-output: end of bundle

  $ f --hexdump ../parts.hg2
  ../parts.hg2:
  0000: 48 47 32 30 00 00 00 00 00 00 00 11 0a 74 65 73 |HG20.........tes|
  0010: 74 3a 65 6d 70 74 79 00 00 00 00 00 00 00 00 00 |t:empty.........|
  0020: 00 00 00 00 11 0a 74 65 73 74 3a 65 6d 70 74 79 |......test:empty|
  0030: 00 00 00 01 00 00 00 00 00 00 00 00 00 10 09 74 |...............t|
  0040: 65 73 74 3a 73 6f 6e 67 00 00 00 02 00 00 00 00 |est:song........|
  0050: 00 b2 50 61 74 61 6c 69 20 44 69 72 61 70 61 74 |..Patali Dirapat|
  0060: 61 2c 20 43 72 6f 6d 64 61 20 43 72 6f 6d 64 61 |a, Cromda Cromda|
  0070: 20 52 69 70 61 6c 6f 2c 20 50 61 74 61 20 50 61 | Ripalo, Pata Pa|
  0080: 74 61 2c 20 4b 6f 20 4b 6f 20 4b 6f 0a 42 6f 6b |ta, Ko Ko Ko.Bok|
  0090: 6f 72 6f 20 44 69 70 6f 75 6c 69 74 6f 2c 20 52 |oro Dipoulito, R|
  00a0: 6f 6e 64 69 20 52 6f 6e 64 69 20 50 65 70 69 6e |ondi Rondi Pepin|
  00b0: 6f 2c 20 50 61 74 61 20 50 61 74 61 2c 20 4b 6f |o, Pata Pata, Ko|
  00c0: 20 4b 6f 20 4b 6f 0a 45 6d 61 6e 61 20 4b 61 72 | Ko Ko.Emana Kar|
  00d0: 61 73 73 6f 6c 69 2c 20 4c 6f 75 63 72 61 20 4c |assoli, Loucra L|
  00e0: 6f 75 63 72 61 20 50 6f 6e 70 6f 6e 74 6f 2c 20 |oucra Ponponto, |
  00f0: 50 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f |Pata Pata, Ko Ko|
  0100: 20 4b 6f 2e 00 00 00 00 00 00 00 16 0f 74 65 73 | Ko..........tes|
  0110: 74 3a 64 65 62 75 67 72 65 70 6c 79 00 00 00 03 |t:debugreply....|
  0120: 00 00 00 00 00 00 00 00 00 2b 09 74 65 73 74 3a |.........+.test:|
  0130: 6d 61 74 68 00 00 00 04 02 01 02 04 01 04 07 03 |math............|
  0140: 70 69 33 2e 31 34 65 32 2e 37 32 63 6f 6f 6b 69 |pi3.14e2.72cooki|
  0150: 6e 67 72 61 77 00 00 00 02 34 32 00 00 00 00 00 |ngraw....42.....|
  0160: 00 00 1d 09 74 65 73 74 3a 73 6f 6e 67 00 00 00 |....test:song...|
  0170: 05 01 00 0b 00 72 61 6e 64 6f 6d 70 61 72 61 6d |.....randomparam|
  0180: 00 00 00 00 00 00 00 10 09 74 65 73 74 3a 70 69 |.........test:pi|
  0190: 6e 67 00 00 00 06 00 00 00 00 00 00 00 00 00 00 |ng..............|


  $ hg statbundle2 < ../parts.hg2
  options count: 0
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:song:
      mandatory: 0
      advisory: 0
      payload: 178 bytes
    :test:debugreply:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:math:
      mandatory: 2
      advisory: 1
      payload: 2 bytes
    :test:song:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :test:ping:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
  parts count: 7

549 $ hg statbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../parts.hg2
550 $ hg statbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../parts.hg2
550 bundle2-input: start processing of HG20 stream
551 bundle2-input: start processing of HG20 stream
551 bundle2-input: reading bundle2 stream parameters
552 bundle2-input: reading bundle2 stream parameters
552 options count: 0
553 options count: 0
553 bundle2-input: start extraction of bundle2 parts
554 bundle2-input: start extraction of bundle2 parts
554 bundle2-input: part header size: 17
555 bundle2-input: part header size: 17
555 bundle2-input: part type: "test:empty"
556 bundle2-input: part type: "test:empty"
556 bundle2-input: part id: "0"
557 bundle2-input: part id: "0"
557 bundle2-input: part parameters: 0
558 bundle2-input: part parameters: 0
558 :test:empty:
559 :test:empty:
559 mandatory: 0
560 mandatory: 0
560 advisory: 0
561 advisory: 0
561 bundle2-input: payload chunk size: 0
562 bundle2-input: payload chunk size: 0
562 payload: 0 bytes
563 payload: 0 bytes
563 bundle2-input: part header size: 17
564 bundle2-input: part header size: 17
564 bundle2-input: part type: "test:empty"
565 bundle2-input: part type: "test:empty"
565 bundle2-input: part id: "1"
566 bundle2-input: part id: "1"
566 bundle2-input: part parameters: 0
567 bundle2-input: part parameters: 0
567 :test:empty:
568 :test:empty:
568 mandatory: 0
569 mandatory: 0
569 advisory: 0
570 advisory: 0
570 bundle2-input: payload chunk size: 0
571 bundle2-input: payload chunk size: 0
571 payload: 0 bytes
572 payload: 0 bytes
572 bundle2-input: part header size: 16
573 bundle2-input: part header size: 16
573 bundle2-input: part type: "test:song"
574 bundle2-input: part type: "test:song"
  bundle2-input: part id: "2"
  bundle2-input: part parameters: 0
    :test:song:
      mandatory: 0
      advisory: 0
  bundle2-input: payload chunk size: 178
  bundle2-input: payload chunk size: 0
  bundle2-input-part: total payload size 178
      payload: 178 bytes
  bundle2-input: part header size: 22
  bundle2-input: part type: "test:debugreply"
  bundle2-input: part id: "3"
  bundle2-input: part parameters: 0
    :test:debugreply:
      mandatory: 0
      advisory: 0
  bundle2-input: payload chunk size: 0
      payload: 0 bytes
  bundle2-input: part header size: 43
  bundle2-input: part type: "test:math"
  bundle2-input: part id: "4"
  bundle2-input: part parameters: 3
    :test:math:
      mandatory: 2
      advisory: 1
  bundle2-input: payload chunk size: 2
  bundle2-input: payload chunk size: 0
  bundle2-input-part: total payload size 2
      payload: 2 bytes
  bundle2-input: part header size: 29
  bundle2-input: part type: "test:song"
  bundle2-input: part id: "5"
  bundle2-input: part parameters: 1
    :test:song:
      mandatory: 1
      advisory: 0
  bundle2-input: payload chunk size: 0
      payload: 0 bytes
  bundle2-input: part header size: 16
  bundle2-input: part type: "test:ping"
  bundle2-input: part id: "6"
  bundle2-input: part parameters: 0
    :test:ping:
      mandatory: 0
      advisory: 0
  bundle2-input: payload chunk size: 0
      payload: 0 bytes
  bundle2-input: part header size: 0
  bundle2-input: end of bundle2 stream
  parts count: 7

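The "payload chunk size" lines above show how a part's payload travels: as a
sequence of uint32-length-prefixed chunks, terminated by a zero-length chunk
(all integers unsigned and big-endian, per the format description in
bundle2.py). A minimal reader for that framing might look like this; it is an
illustrative sketch, not Mercurial's own code:

```python
import io
import struct

def read_payload(stream):
    """Read one bundle2 part payload: uint32-length-prefixed chunks,
    big-endian, closed by a zero-length chunk."""
    chunks = []
    while True:
        (size,) = struct.unpack(">I", stream.read(4))
        if size == 0:          # the empty chunk marks the end of the payload
            break
        chunks.append(stream.read(size))
    return b"".join(chunks)

# Example: one 5-byte chunk followed by the terminating empty chunk.
data = struct.pack(">I", 5) + b"hello" + struct.pack(">I", 0)
assert read_payload(io.BytesIO(data)) == b"hello"
```

An empty payload, as carried by the `test:ping` parts above, is just the
terminating zero-length chunk on its own.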
Test actual unbundling of test part
=======================================

Process the bundle

  $ hg unbundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true < ../parts.hg2
  bundle2-input: start processing of HG20 stream
  bundle2-input: reading bundle2 stream parameters
  bundle2-input-bundle: with-transaction
  bundle2-input: start extraction of bundle2 parts
  bundle2-input: part header size: 17
  bundle2-input: part type: "test:empty"
  bundle2-input: part id: "0"
  bundle2-input: part parameters: 0
  bundle2-input: ignoring unsupported advisory part test:empty
  bundle2-input-part: "test:empty" (advisory) unsupported-type
  bundle2-input: payload chunk size: 0
  bundle2-input: part header size: 17
  bundle2-input: part type: "test:empty"
  bundle2-input: part id: "1"
  bundle2-input: part parameters: 0
  bundle2-input: ignoring unsupported advisory part test:empty
  bundle2-input-part: "test:empty" (advisory) unsupported-type
  bundle2-input: payload chunk size: 0
  bundle2-input: part header size: 16
  bundle2-input: part type: "test:song"
  bundle2-input: part id: "2"
  bundle2-input: part parameters: 0
  bundle2-input: found a handler for part 'test:song'
  bundle2-input-part: "test:song" (advisory) supported
  The choir starts singing:
  bundle2-input: payload chunk size: 178
  bundle2-input: payload chunk size: 0
  bundle2-input-part: total payload size 178
      Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
      Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
      Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  bundle2-input: part header size: 22
  bundle2-input: part type: "test:debugreply"
  bundle2-input: part id: "3"
  bundle2-input: part parameters: 0
  bundle2-input: found a handler for part 'test:debugreply'
  bundle2-input-part: "test:debugreply" (advisory) supported
  debugreply: no reply
  bundle2-input: payload chunk size: 0
  bundle2-input: part header size: 43
  bundle2-input: part type: "test:math"
  bundle2-input: part id: "4"
  bundle2-input: part parameters: 3
  bundle2-input: ignoring unsupported advisory part test:math
  bundle2-input-part: "test:math" (advisory) (params: 2 mandatory 2 advisory) unsupported-type
  bundle2-input: payload chunk size: 2
  bundle2-input: payload chunk size: 0
  bundle2-input-part: total payload size 2
  bundle2-input: part header size: 29
  bundle2-input: part type: "test:song"
  bundle2-input: part id: "5"
  bundle2-input: part parameters: 1
  bundle2-input: found a handler for part 'test:song'
  bundle2-input: ignoring unsupported advisory part test:song - randomparam
  bundle2-input-part: "test:song" (advisory) (params: 1 mandatory) unsupported-params (['randomparam'])
  bundle2-input: payload chunk size: 0
  bundle2-input: part header size: 16
  bundle2-input: part type: "test:ping"
  bundle2-input: part id: "6"
  bundle2-input: part parameters: 0
  bundle2-input: found a handler for part 'test:ping'
  bundle2-input-part: "test:ping" (advisory) supported
  received ping request (id 6)
  bundle2-input: payload chunk size: 0
  bundle2-input: part header size: 0
  bundle2-input: end of bundle2 stream
  bundle2-input-bundle: 6 parts total
  0 unread bytes
  3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
      Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
      Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
      Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  debugreply: no reply
  0 unread bytes
  abort: missing support for test:unknown
  [255]

Unbundle with unknown mandatory part parameters
(should abort)

  $ hg bundle2 --unknownparams ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  0 unread bytes
  abort: missing support for test:song - randomparams
  [255]

unbundle with a reply

  $ hg bundle2 --parts --reply ../parts-reply.hg2
  $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ f --hexdump ../reply.hg2
  ../reply.hg2:
  0000: 48 47 32 30 00 00 00 00 00 00 00 1b 06 6f 75 74 |HG20.........out|
  0010: 70 75 74 00 00 00 00 00 01 0b 01 69 6e 2d 72 65 |put........in-re|
  0020: 70 6c 79 2d 74 6f 33 00 00 00 d9 54 68 65 20 63 |ply-to3....The c|
  0030: 68 6f 69 72 20 73 74 61 72 74 73 20 73 69 6e 67 |hoir starts sing|
  0040: 69 6e 67 3a 0a 20 20 20 20 50 61 74 61 6c 69 20 |ing:.    Patali |
  0050: 44 69 72 61 70 61 74 61 2c 20 43 72 6f 6d 64 61 |Dirapata, Cromda|
  0060: 20 43 72 6f 6d 64 61 20 52 69 70 61 6c 6f 2c 20 | Cromda Ripalo, |
  0070: 50 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f |Pata Pata, Ko Ko|
  0080: 20 4b 6f 0a 20 20 20 20 42 6f 6b 6f 72 6f 20 44 | Ko.    Bokoro D|
  0090: 69 70 6f 75 6c 69 74 6f 2c 20 52 6f 6e 64 69 20 |ipoulito, Rondi |
  00a0: 52 6f 6e 64 69 20 50 65 70 69 6e 6f 2c 20 50 61 |Rondi Pepino, Pa|
  00b0: 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f 20 4b |ta Pata, Ko Ko K|
  00c0: 6f 0a 20 20 20 20 45 6d 61 6e 61 20 4b 61 72 61 |o.    Emana Kara|
  00d0: 73 73 6f 6c 69 2c 20 4c 6f 75 63 72 61 20 4c 6f |ssoli, Loucra Lo|
  00e0: 75 63 72 61 20 50 6f 6e 70 6f 6e 74 6f 2c 20 50 |ucra Ponponto, P|
  00f0: 61 74 61 20 50 61 74 61 2c 20 4b 6f 20 4b 6f 20 |ata Pata, Ko Ko |
  0100: 4b 6f 2e 0a 00 00 00 00 00 00 00 1b 06 6f 75 74 |Ko...........out|
  0110: 70 75 74 00 00 00 01 00 01 0b 01 69 6e 2d 72 65 |put........in-re|
  0120: 70 6c 79 2d 74 6f 34 00 00 00 c9 64 65 62 75 67 |ply-to4....debug|
  0130: 72 65 70 6c 79 3a 20 63 61 70 61 62 69 6c 69 74 |reply: capabilit|
  0140: 69 65 73 3a 0a 64 65 62 75 67 72 65 70 6c 79 3a |ies:.debugreply:|
  0150: 20 20 20 20 20 27 63 69 74 79 3d 21 27 0a 64 65 |     'city=!'.de|
  0160: 62 75 67 72 65 70 6c 79 3a 20 20 20 20 20 20 20 |bugreply:       |
  0170: 20 20 27 63 65 6c 65 73 74 65 2c 76 69 6c 6c 65 |  'celeste,ville|
  0180: 27 0a 64 65 62 75 67 72 65 70 6c 79 3a 20 20 20 |'.debugreply:   |
  0190: 20 20 27 65 6c 65 70 68 61 6e 74 73 27 0a 64 65 |  'elephants'.de|
  01a0: 62 75 67 72 65 70 6c 79 3a 20 20 20 20 20 20 20 |bugreply:       |
  01b0: 20 20 27 62 61 62 61 72 27 0a 64 65 62 75 67 72 |  'babar'.debugr|
  01c0: 65 70 6c 79 3a 20 20 20 20 20 20 20 20 20 27 63 |eply:         'c|
  01d0: 65 6c 65 73 74 65 27 0a 64 65 62 75 67 72 65 70 |eleste'.debugrep|
  01e0: 6c 79 3a 20 20 20 20 20 27 70 69 6e 67 2d 70 6f |ly:     'ping-po|
  01f0: 6e 67 27 0a 00 00 00 00 00 00 00 1e 09 74 65 73 |ng'..........tes|
  0200: 74 3a 70 6f 6e 67 00 00 00 02 01 00 0b 01 69 6e |t:pong........in|
  0210: 2d 72 65 70 6c 79 2d 74 6f 37 00 00 00 00 00 00 |-reply-to7......|
  0220: 00 1b 06 6f 75 74 70 75 74 00 00 00 03 00 01 0b |...output.......|
  0230: 01 69 6e 2d 72 65 70 6c 79 2d 74 6f 37 00 00 00 |.in-reply-to7...|
  0240: 3d 72 65 63 65 69 76 65 64 20 70 69 6e 67 20 72 |=received ping r|
  0250: 65 71 75 65 73 74 20 28 69 64 20 37 29 0a 72 65 |equest (id 7).re|
  0260: 70 6c 79 69 6e 67 20 74 6f 20 70 69 6e 67 20 72 |plying to ping r|
  0270: 65 71 75 65 73 74 20 28 69 64 20 37 29 0a 00 00 |equest (id 7)...|
  0280: 00 00 00 00 00 00                               |......|

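The hexdump above can be decoded by hand: after the `HG20` magic come a uint32
length of stream-level parameters (here 0) and then a sequence of parts, each
introduced by a uint32 header size, with a zero header size ending the stream.
The following walker is a sketch reconstructed from these dumps and the format
notes in bundle2.py, not Mercurial's own unbundler:

```python
import io
import struct

def read_parts(stream):
    """Walk a bundle2 stream and yield (parttype, partid) pairs.

    Layout (all integers big-endian): magic "HG20"; uint32 length of
    stream-level parameters plus that blob; then, per part, a uint32
    header size (0 = end of stream) and a header made of a uint8 type
    length, the type, a uint32 part id, and the parameter block;
    finally uint32-prefixed payload chunks ended by a zero chunk.
    """
    assert stream.read(4) == b"HG20"
    (paramlen,) = struct.unpack(">I", stream.read(4))
    stream.read(paramlen)                      # skip stream-level parameters
    while True:
        (hsize,) = struct.unpack(">I", stream.read(4))
        if hsize == 0:                         # end-of-stream marker
            return
        header = stream.read(hsize)
        typelen = header[0]
        parttype = header[1:1 + typelen].decode()
        (partid,) = struct.unpack(">I", header[1 + typelen:5 + typelen])
        yield parttype, partid
        while True:                            # skip the payload chunks
            (csize,) = struct.unpack(">I", stream.read(4))
            if csize == 0:
                break
            stream.read(csize)

# A minimal stream holding one empty "test:ping" part; the header is
# 1 + 9 + 4 + 2 = 16 bytes, matching the "part header size: 16" lines above.
hdr = b"\x09test:ping" + struct.pack(">I", 6) + b"\x00\x00"
raw = (b"HG20" + struct.pack(">I", 0) + struct.pack(">I", len(hdr)) + hdr
       + struct.pack(">I", 0) + struct.pack(">I", 0))
assert list(read_parts(io.BytesIO(raw))) == [("test:ping", 6)]
```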
The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
    :output:
      mandatory: 0
      advisory: 1
      payload: 217 bytes
    :output:
      mandatory: 0
      advisory: 1
      payload: 201 bytes
    :test:pong:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :output:
      mandatory: 0
      advisory: 1
      payload: 61 bytes
  parts count: 4

Unbundle the reply to get the output:

  $ hg unbundle2 < ../reply.hg2
  remote: The choir starts singing:
  remote:     Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  remote:     Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  remote:     Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  remote: debugreply: capabilities:
  remote: debugreply:     'city=!'
  remote: debugreply:         'celeste,ville'
  remote: debugreply:     'elephants'
  remote: debugreply:         'babar'
  remote: debugreply:         'celeste'
  remote: debugreply:     'ping-pong'
  remote: received ping request (id 7)
  remote: replying to ping request (id 7)
  0 unread bytes

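Each reply part above names the part it answers through an `in-reply-to`
parameter, visible in the hexdump as `in-reply-to3` and `in-reply-to7`. In the
part header, parameters are encoded as two uint8 counts (mandatory, then
advisory), one uint8 (key size, value size) pair per parameter, and the
concatenated key/value blobs. A decoding sketch reconstructed from the dumps
(not Mercurial's implementation):

```python
import struct

def parse_part_header(header):
    """Decode one bundle2 part header blob into (type, id, params)."""
    typelen = header[0]
    parttype = header[1:1 + typelen].decode()
    pos = 1 + typelen
    (partid,) = struct.unpack(">I", header[pos:pos + 4])
    pos += 4
    mancount, advcount = header[pos], header[pos + 1]
    pos += 2
    sizes = []
    for _ in range(mancount + advcount):
        sizes.append((header[pos], header[pos + 1]))  # (key size, value size)
        pos += 2
    params = {}
    for keysize, valsize in sizes:
        key = header[pos:pos + keysize].decode()
        params[key] = header[pos + keysize:pos + keysize + valsize].decode()
        pos += keysize + valsize
    return parttype, partid, params

# The first reply part from ../reply.hg2: the 27 bytes following the
# 00 00 00 1b size word in the hexdump above.
hdr = bytes.fromhex("066f75747075740000000000010b01696e2d7265706c792d746f33")
assert parse_part_header(hdr) == ("output", 0, {"in-reply-to": "3"})
```

The advisory `output` parts carry the remote's console output back to the
client, which is why the unbundle above prefixes each line with `remote:`.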
Test push race detection

  $ hg bundle2 --pushrace ../part-race.hg2

  $ hg unbundle2 < ../part-race.hg2
  0 unread bytes
  abort: push race: repository changed while pushing - please try again
  [255]

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o  8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o  7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o |  6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o  5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | o  4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | o  3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o  2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o  1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A

  @  0:3903775176ed draft test a


  $ hg bundle2 --debug --config progress.debug=true --config devel.bundle2.debug=true --rev '8+7+5+4' ../rev.hg2
  4 changesets found
  list of changesets:
  32af7686d403cf45b5d95f2d70cebea587ac806a
  9520eea781bcca16c1e15acc0ba14335a0e8e5ba
  eea13746799a9e0bfd88f29d3c2e9dc9389f524f
  02de42196ebee42ef284b6780a87cdc96e8eaab6
  bundle2-output-bundle: "HG20", 1 parts total
  bundle2-output: start emission of HG20 stream
  bundle2-output: bundle parameter:
  bundle2-output: start of parts
  bundle2-output: bundle part: "changegroup"
  bundle2-output-part: "changegroup" (advisory) streamed payload
  bundle2-output: part 0: "changegroup"
  bundle2-output: header chunk size: 18
  bundling: 1/4 changesets (25.00%)
  bundling: 2/4 changesets (50.00%)
  bundling: 3/4 changesets (75.00%)
  bundling: 4/4 changesets (100.00%)
  bundling: 1/4 manifests (25.00%)
  bundling: 2/4 manifests (50.00%)
  bundling: 3/4 manifests (75.00%)
  bundling: 4/4 manifests (100.00%)
  bundling: D 1/3 files (33.33%)
  bundling: E 2/3 files (66.67%)
  bundling: H 3/3 files (100.00%)
  bundle2-output: payload chunk size: 1555
  bundle2-output: closing payload chunk
  bundle2-output: end of bundle

  $ f --hexdump ../rev.hg2
  ../rev.hg2:
  0000: 48 47 32 30 00 00 00 00 00 00 00 12 0b 63 68 61 |HG20.........cha|
  0010: 6e 67 65 67 72 6f 75 70 00 00 00 00 00 00 00 00 |ngegroup........|
  0020: 06 13 00 00 00 a4 32 af 76 86 d4 03 cf 45 b5 d9 |......2.v....E..|
  0030: 5f 2d 70 ce be a5 87 ac 80 6a 5f dd d9 89 57 c8 |_-p......j_...W.|
  0040: a5 4a 4d 43 6d fe 1d a9 d8 7f 21 a1 b9 7b 00 00 |.JMCm.....!..{..|
  0050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
  0060: 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f 2d 70 ce |..2.v....E.._-p.|
  0070: be a5 87 ac 80 6a 00 00 00 00 00 00 00 29 00 00 |.....j.......)..|
  0080: 00 29 36 65 31 66 34 63 34 37 65 63 62 35 33 33 |.)6e1f4c47ecb533|
  0090: 66 66 64 30 63 38 65 35 32 63 64 63 38 38 61 66 |ffd0c8e52cdc88af|
  00a0: 62 36 63 64 33 39 65 32 30 63 0a 00 00 00 66 00 |b6cd39e20c....f.|
  00b0: 00 00 68 00 00 00 02 44 0a 00 00 00 69 00 00 00 |..h....D....i...|
  00c0: 6a 00 00 00 01 44 00 00 00 a4 95 20 ee a7 81 bc |j....D..... ....|
  00d0: ca 16 c1 e1 5a cc 0b a1 43 35 a0 e8 e5 ba cd 01 |....Z...C5......|
  00e0: 0b 8c d9 98 f3 98 1a 5a 81 15 f9 4f 8d a4 ab 50 |.......Z...O...P|
  00f0: 60 89 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |`...............|
  0100: 00 00 00 00 00 00 95 20 ee a7 81 bc ca 16 c1 e1 |....... ........|
  0110: 5a cc 0b a1 43 35 a0 e8 e5 ba 00 00 00 00 00 00 |Z...C5..........|
  0120: 00 29 00 00 00 29 34 64 65 63 65 39 63 38 32 36 |.)...)4dece9c826|
  0130: 66 36 39 34 39 30 35 30 37 62 39 38 63 36 33 38 |f69490507b98c638|
  0140: 33 61 33 30 30 39 62 32 39 35 38 33 37 64 0a 00 |3a3009b295837d..|
  0150: 00 00 66 00 00 00 68 00 00 00 02 45 0a 00 00 00 |..f...h....E....|
  0160: 69 00 00 00 6a 00 00 00 01 45 00 00 00 a2 ee a1 |i...j....E......|
  0170: 37 46 79 9a 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f |7Fy.......<...8.|
  0180: 52 4f 24 b6 38 7c 8c 8c ae 37 17 88 80 f3 fa 95 |RO$.8|...7......|
  0190: de d3 cb 1c f7 85 95 20 ee a7 81 bc ca 16 c1 e1 |....... ........|
  01a0: 5a cc 0b a1 43 35 a0 e8 e5 ba ee a1 37 46 79 9a |Z...C5......7Fy.|
  01b0: 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f 52 4f 00 00 |......<...8.RO..|
  01c0: 00 00 00 00 00 29 00 00 00 29 33 36 35 62 39 33 |.....)...)365b93|
  01d0: 64 35 37 66 64 66 34 38 31 34 65 32 62 35 39 31 |d57fdf4814e2b591|
  01e0: 31 64 36 62 61 63 66 66 32 62 31 32 30 31 34 34 |1d6bacff2b120144|
  01f0: 34 31 0a 00 00 00 66 00 00 00 68 00 00 00 00 00 |41....f...h.....|
  0200: 00 00 69 00 00 00 6a 00 00 00 01 47 00 00 00 a4 |..i...j....G....|
  0210: 02 de 42 19 6e be e4 2e f2 84 b6 78 0a 87 cd c9 |..B.n......x....|
  0220: 6e 8e aa b6 24 b6 38 7c 8c 8c ae 37 17 88 80 f3 |n...$.8|...7....|
  0230: fa 95 de d3 cb 1c f7 85 00 00 00 00 00 00 00 00 |................|
  0240: 00 00 00 00 00 00 00 00 00 00 00 00 02 de 42 19 |..............B.|
  0250: 6e be e4 2e f2 84 b6 78 0a 87 cd c9 6e 8e aa b6 |n......x....n...|
  0260: 00 00 00 00 00 00 00 29 00 00 00 29 38 62 65 65 |.......)...)8bee|
  0270: 34 38 65 64 63 37 33 31 38 35 34 31 66 63 30 30 |48edc7318541fc00|
  0280: 31 33 65 65 34 31 62 30 38 39 32 37 36 61 38 63 |13ee41b089276a8c|
  0290: 32 34 62 66 0a 00 00 00 66 00 00 00 66 00 00 00 |24bf....f...f...|
  02a0: 02 48 0a 00 00 00 67 00 00 00 68 00 00 00 01 48 |.H....g...h....H|
  02b0: 00 00 00 00 00 00 00 8b 6e 1f 4c 47 ec b5 33 ff |........n.LG..3.|
  02c0: d0 c8 e5 2c dc 88 af b6 cd 39 e2 0c 66 a5 a0 18 |...,.....9..f...|
  02d0: 17 fd f5 23 9c 27 38 02 b5 b7 61 8d 05 1c 89 e4 |...#.'8...a.....|
  02e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
  02f0: 00 00 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f 2d |....2.v....E.._-|
  0300: 70 ce be a5 87 ac 80 6a 00 00 00 81 00 00 00 81 |p......j........|
  0310: 00 00 00 2b 44 00 63 33 66 31 63 61 32 39 32 34 |...+D.c3f1ca2924|
  0320: 63 31 36 61 31 39 62 30 36 35 36 61 38 34 39 30 |c16a19b0656a8490|
  0330: 30 65 35 30 34 65 35 62 30 61 65 63 32 64 0a 00 |0e504e5b0aec2d..|
  0340: 00 00 8b 4d ec e9 c8 26 f6 94 90 50 7b 98 c6 38 |...M...&...P{..8|
  0350: 3a 30 09 b2 95 83 7d 00 7d 8c 9d 88 84 13 25 f5 |:0....}.}.....%.|
  0360: c6 b0 63 71 b3 5b 4e 8a 2b 1a 83 00 00 00 00 00 |..cq.[N.+.......|
  0370: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 95 |................|
  0380: 20 ee a7 81 bc ca 16 c1 e1 5a cc 0b a1 43 35 a0 | ........Z...C5.|
  0390: e8 e5 ba 00 00 00 2b 00 00 00 ac 00 00 00 2b 45 |......+.......+E|
  03a0: 00 39 63 36 66 64 30 33 35 30 61 36 63 30 64 30 |.9c6fd0350a6c0d0|
  03b0: 63 34 39 64 34 61 39 63 35 30 31 37 63 66 30 37 |c49d4a9c5017cf07|
  03c0: 30 34 33 66 35 34 65 35 38 0a 00 00 00 8b 36 5b |043f54e58.....6[|
  03d0: 93 d5 7f df 48 14 e2 b5 91 1d 6b ac ff 2b 12 01 |....H.....k..+..|
  03e0: 44 41 28 a5 84 c6 5e f1 21 f8 9e b6 6a b7 d0 bc |DA(...^.!...j...|
  03f0: 15 3d 80 99 e7 ce 4d ec e9 c8 26 f6 94 90 50 7b |.=....M...&...P{|
  0400: 98 c6 38 3a 30 09 b2 95 83 7d ee a1 37 46 79 9a |..8:0....}..7Fy.|
  0410: 9e 0b fd 88 f2 9d 3c 2e 9d c9 38 9f 52 4f 00 00 |......<...8.RO..|
  0420: 00 56 00 00 00 56 00 00 00 2b 46 00 32 32 62 66 |.V...V...+F.22bf|
  0430: 63 66 64 36 32 61 32 31 61 33 32 38 37 65 64 62 |cfd62a21a3287edb|
  0440: 64 34 64 36 35 36 32 31 38 64 30 66 35 32 35 65 |d4d656218d0f525e|
  0450: 64 37 36 61 0a 00 00 00 97 8b ee 48 ed c7 31 85 |d76a.......H..1.|
  0460: 41 fc 00 13 ee 41 b0 89 27 6a 8c 24 bf 28 a5 84 |A....A..'j.$.(..|
961 0460: 41 fc 00 13 ee 41 b0 89 27 6a 8c 24 bf 28 a5 84 |A....A..'j.$.(..|
961 0470: c6 5e f1 21 f8 9e b6 6a b7 d0 bc 15 3d 80 99 e7 |.^.!...j....=...|
962 0470: c6 5e f1 21 f8 9e b6 6a b7 d0 bc 15 3d 80 99 e7 |.^.!...j....=...|
962 0480: ce 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
963 0480: ce 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
963 0490: 00 00 00 00 00 02 de 42 19 6e be e4 2e f2 84 b6 |.......B.n......|
964 0490: 00 00 00 00 00 02 de 42 19 6e be e4 2e f2 84 b6 |.......B.n......|
964 04a0: 78 0a 87 cd c9 6e 8e aa b6 00 00 00 2b 00 00 00 |x....n......+...|
965 04a0: 78 0a 87 cd c9 6e 8e aa b6 00 00 00 2b 00 00 00 |x....n......+...|
965 04b0: 56 00 00 00 00 00 00 00 81 00 00 00 81 00 00 00 |V...............|
966 04b0: 56 00 00 00 00 00 00 00 81 00 00 00 81 00 00 00 |V...............|
966 04c0: 2b 48 00 38 35 30 30 31 38 39 65 37 34 61 39 65 |+H.8500189e74a9e|
967 04c0: 2b 48 00 38 35 30 30 31 38 39 65 37 34 61 39 65 |+H.8500189e74a9e|
967 04d0: 30 34 37 35 65 38 32 32 30 39 33 62 63 37 64 62 |0475e822093bc7db|
968 04d0: 30 34 37 35 65 38 32 32 30 39 33 62 63 37 64 62 |0475e822093bc7db|
968 04e0: 30 64 36 33 31 61 65 62 30 62 34 0a 00 00 00 00 |0d631aeb0b4.....|
969 04e0: 30 64 36 33 31 61 65 62 30 62 34 0a 00 00 00 00 |0d631aeb0b4.....|
969 04f0: 00 00 00 05 44 00 00 00 62 c3 f1 ca 29 24 c1 6a |....D...b...)$.j|
970 04f0: 00 00 00 05 44 00 00 00 62 c3 f1 ca 29 24 c1 6a |....D...b...)$.j|
970 0500: 19 b0 65 6a 84 90 0e 50 4e 5b 0a ec 2d 00 00 00 |..ej...PN[..-...|
971 0500: 19 b0 65 6a 84 90 0e 50 4e 5b 0a ec 2d 00 00 00 |..ej...PN[..-...|
971 0510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
972 0510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
972 0520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
973 0520: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
973 0530: 00 00 00 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f |.....2.v....E.._|
974 0530: 00 00 00 00 00 32 af 76 86 d4 03 cf 45 b5 d9 5f |.....2.v....E.._|
974 0540: 2d 70 ce be a5 87 ac 80 6a 00 00 00 00 00 00 00 |-p......j.......|
975 0540: 2d 70 ce be a5 87 ac 80 6a 00 00 00 00 00 00 00 |-p......j.......|
975 0550: 00 00 00 00 02 44 0a 00 00 00 00 00 00 00 05 45 |.....D.........E|
976 0550: 00 00 00 00 02 44 0a 00 00 00 00 00 00 00 05 45 |.....D.........E|
976 0560: 00 00 00 62 9c 6f d0 35 0a 6c 0d 0c 49 d4 a9 c5 |...b.o.5.l..I...|
977 0560: 00 00 00 62 9c 6f d0 35 0a 6c 0d 0c 49 d4 a9 c5 |...b.o.5.l..I...|
977 0570: 01 7c f0 70 43 f5 4e 58 00 00 00 00 00 00 00 00 |.|.pC.NX........|
978 0570: 01 7c f0 70 43 f5 4e 58 00 00 00 00 00 00 00 00 |.|.pC.NX........|
978 0580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
979 0580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
979 0590: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
980 0590: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
980 05a0: 95 20 ee a7 81 bc ca 16 c1 e1 5a cc 0b a1 43 35 |. ........Z...C5|
981 05a0: 95 20 ee a7 81 bc ca 16 c1 e1 5a cc 0b a1 43 35 |. ........Z...C5|
981 05b0: a0 e8 e5 ba 00 00 00 00 00 00 00 00 00 00 00 02 |................|
982 05b0: a0 e8 e5 ba 00 00 00 00 00 00 00 00 00 00 00 02 |................|
982 05c0: 45 0a 00 00 00 00 00 00 00 05 48 00 00 00 62 85 |E.........H...b.|
983 05c0: 45 0a 00 00 00 00 00 00 00 05 48 00 00 00 62 85 |E.........H...b.|
983 05d0: 00 18 9e 74 a9 e0 47 5e 82 20 93 bc 7d b0 d6 31 |...t..G^. ..}..1|
984 05d0: 00 18 9e 74 a9 e0 47 5e 82 20 93 bc 7d b0 d6 31 |...t..G^. ..}..1|
984 05e0: ae b0 b4 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
985 05e0: ae b0 b4 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
985 05f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
986 05f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
986 0600: 00 00 00 00 00 00 00 00 00 00 00 02 de 42 19 6e |.............B.n|
987 0600: 00 00 00 00 00 00 00 00 00 00 00 02 de 42 19 6e |.............B.n|
987 0610: be e4 2e f2 84 b6 78 0a 87 cd c9 6e 8e aa b6 00 |......x....n....|
988 0610: be e4 2e f2 84 b6 78 0a 87 cd c9 6e 8e aa b6 00 |......x....n....|
988 0620: 00 00 00 00 00 00 00 00 00 00 02 48 0a 00 00 00 |...........H....|
989 0620: 00 00 00 00 00 00 00 00 00 00 02 48 0a 00 00 00 |...........H....|
989 0630: 00 00 00 00 00 00 00 00 00 00 00 00 00 |.............|
990 0630: 00 00 00 00 00 00 00 00 00 00 00 00 00 |.............|
990
991
  $ hg debugbundle ../rev.hg2
  Stream params: {}
  changegroup -- {}
      32af7686d403cf45b5d95f2d70cebea587ac806a
      9520eea781bcca16c1e15acc0ba14335a0e8e5ba
      eea13746799a9e0bfd88f29d3c2e9dc9389f524f
      02de42196ebee42ef284b6780a87cdc96e8eaab6
  $ hg unbundle ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  (run 'hg update' to get a working copy)

with reply

  $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
  $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
  0 unread bytes
  addchangegroup return: 1

  $ f --hexdump ../rev-reply.hg2
  ../rev-reply.hg2:
  0000: 48 47 32 30 00 00 00 00 00 00 00 2f 11 72 65 70 |HG20......./.rep|
  0010: 6c 79 3a 63 68 61 6e 67 65 67 72 6f 75 70 00 00 |ly:changegroup..|
  0020: 00 00 00 02 0b 01 06 01 69 6e 2d 72 65 70 6c 79 |........in-reply|
  0030: 2d 74 6f 31 72 65 74 75 72 6e 31 00 00 00 00 00 |-to1return1.....|
  0040: 00 00 1b 06 6f 75 74 70 75 74 00 00 00 01 00 01 |....output......|
  0050: 0b 01 69 6e 2d 72 65 70 6c 79 2d 74 6f 31 00 00 |..in-reply-to1..|
  0060: 00 64 61 64 64 69 6e 67 20 63 68 61 6e 67 65 73 |.dadding changes|
  0070: 65 74 73 0a 61 64 64 69 6e 67 20 6d 61 6e 69 66 |ets.adding manif|
  0080: 65 73 74 73 0a 61 64 64 69 6e 67 20 66 69 6c 65 |ests.adding file|
  0090: 20 63 68 61 6e 67 65 73 0a 61 64 64 65 64 20 30 | changes.added 0|
  00a0: 20 63 68 61 6e 67 65 73 65 74 73 20 77 69 74 68 | changesets with|
  00b0: 20 30 20 63 68 61 6e 67 65 73 20 74 6f 20 33 20 | 0 changes to 3 |
  00c0: 66 69 6c 65 73 0a 00 00 00 00 00 00 00 00 |files.........|

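The dump above starts with the bundle2 magic `HG20`, a 32-bit big-endian stream-parameter size (zero here), then the first part header: its 32-bit size (`0x2f`), a one-byte part-type length (`0x11` = 17) and the type `reply:changegroup`. A minimal sketch of reading those leading fields, based on the format description at the top of bundle2.py (illustrative only, not Mercurial's internal API):

```python
import struct

def parse_first_part(data):
    """Parse the bundle2 magic, stream-level parameters and the first
    part header from a raw byte string (sketch of the wire layout)."""
    assert data[:4] == b"HG20", "bad magic"
    offset = 4
    # stream-level parameters: 32-bit big-endian size, then the blob itself
    (paramssize,) = struct.unpack_from(">I", data, offset)
    offset += 4
    streamparams = data[offset:offset + paramssize].decode("ascii")
    offset += paramssize
    # first part: 32-bit header-block size, then the header block
    (headersize,) = struct.unpack_from(">I", data, offset)
    offset += 4
    header = data[offset:offset + headersize]
    typelen = header[0]                       # one-byte part-type length
    parttype = header[1:1 + typelen].decode("ascii")
    return streamparams, headersize, parttype
```

Fed the bytes shown in the hexdump, this yields empty stream params, a header size of 47 (`0x2f`) and the part type `reply:changegroup`.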
Check handling of exception during generation.
----------------------------------------------

  $ hg bundle2 --genraise > ../genfailed.hg2
  abort: Someone set up us the bomb!
  [255]

Should still be a valid bundle

  $ f --hexdump ../genfailed.hg2
  ../genfailed.hg2:
  0000: 48 47 32 30 00 00 00 00 00 00 00 0d 06 6f 75 74 |HG20.........out|
  0010: 70 75 74 00 00 00 00 00 00 ff ff ff ff 00 00 00 |put.............|
  0020: 48 0b 65 72 72 6f 72 3a 61 62 6f 72 74 00 00 00 |H.error:abort...|
  0030: 00 01 00 07 2d 6d 65 73 73 61 67 65 75 6e 65 78 |....-messageunex|
  0040: 70 65 63 74 65 64 20 65 72 72 6f 72 3a 20 53 6f |pected error: So|
  0050: 6d 65 6f 6e 65 20 73 65 74 20 75 70 20 75 73 20 |meone set up us |
  0060: 74 68 65 20 62 6f 6d 62 21 00 00 00 00 00 00 00 |the bomb!.......|
  0070: 00 |.|

And its handling on the other side raises a clean exception

  $ cat ../genfailed.hg2 | hg unbundle2
  0 unread bytes
  abort: unexpected error: Someone set up us the bomb!
  [255]

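The hexdump of `../genfailed.hg2` shows how that works: a part's payload is a sequence of `<int32 size><data>` frames, and right after the `output` part header comes `ff ff ff ff`, i.e. size -1, which (per the payload framing notes in bundle2.py) announces an interruption carrying the out-of-band `error:abort` part. A sketch of that framing, with hypothetical helper names, not Mercurial's API:

```python
import io
import struct

def iter_payload_chunks(read):
    """Walk a bundle2 part payload: <int32 size><data> frames.
    Size 0 ends the payload; size -1 (ff ff ff ff above) signals an
    interruption whose out-of-band part the caller must process next."""
    while True:
        (size,) = struct.unpack(">i", read(4))  # signed, big-endian
        if size == 0:
            return                              # end-of-payload marker
        if size == -1:
            yield ("interrupt", None)           # out-of-band part follows
            return
        yield ("data", read(size))

# A payload with one data chunk followed by the end-of-payload marker:
frames = io.BytesIO(struct.pack(">i", 5) + b"hello" + struct.pack(">i", 0))
chunks = list(iter_payload_chunks(frames.read))
```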
Test compression
================

Simple case where it just works: GZ
-----------------------------------

  $ hg bundle2 --compress GZ --rev '8+7+5+4' ../rev.hg2.bz
  $ f --hexdump ../rev.hg2.bz
  ../rev.hg2.bz:
  0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
  0010: 69 6f 6e 3d 47 5a 78 9c 95 94 7d 68 95 55 1c c7 |ion=GZx...}h.U..|
  0020: 9f 3b 31 e8 ce fa c3 65 be a0 a4 b4 52 b9 29 e7 |.;1....e....R.).|
  0030: f5 79 ce 89 fa 63 ed 5e 77 8b 9c c3 3f 2a 1c 68 |.y...c.^w...?*.h|
  0040: cf 79 9b dd 6a ae b0 28 74 b8 e5 96 5b bb 86 61 |.y..j..(t...[..a|
  0050: a3 15 6e 3a 71 c8 6a e8 a5 da 95 64 28 22 ce 69 |..n:q.j....d(".i|
  0060: cd 06 59 34 28 2b 51 2a 58 c3 17 56 2a 9a 9d 67 |..Y4(+Q*X..V*..g|
  0070: dc c6 35 9e c4 1d f8 9e 87 f3 9c f3 3b bf 0f bf |..5.........;...|
  0080: 97 e3 38 ce f4 42 b9 d6 af ae d2 55 af ae 7b ad |..8..B.....U..{.|
  0090: c6 c9 8d bb 8a ec b4 07 ed 7f fd ed d3 53 be 4e |.............S.N|
  00a0: f4 0e af 59 52 73 ea 50 d7 96 9e ba d4 9a 1f 87 |...YRs.P........|
  00b0: 9b 9f 1d e8 7a 6a 79 e9 cb 7f cf eb fe 7e d3 82 |....zjy......~..|
  00c0: ce 2f 36 38 21 23 cc 36 b7 b5 38 90 ab a1 21 92 |./68!#.6..8...!.|
  00d0: 78 5a 0a 8a b1 31 0a 48 a6 29 92 4a 32 e6 1b e1 |xZ...1.H.).J2...|
  00e0: 4a 85 b9 46 40 46 ed 61 63 b5 d6 aa 20 1e ac 5e |J..F@F.ac... ..^|
  00f0: b0 0a ae 8a c4 03 c6 d6 f9 a3 7b eb fb 4e de 7f |..........{..N..|
  0100: e4 97 55 5f 15 76 96 d2 5d bf 9d 3f 38 18 29 4c |..U_.v..]..?8.)L|
  0110: 0f b7 5d 6e 9b b3 aa 7e c6 d5 15 5b f7 7c 52 f1 |..]n...~...[.|R.|
  0120: 7c 73 18 63 98 6d 3e 23 51 5a 6a 2e 19 72 8d cb ||s.c.m>#QZj..r..|
  0130: 09 07 14 78 82 33 e9 62 86 7d 0c 00 17 88 53 86 |...x.3.b.}....S.|
  0140: 3d 75 0b 63 e2 16 c6 84 9d 76 8f 76 7a cb de fc |=u.c.....v.vz...|
  0150: a8 a3 f0 46 d3 a5 f6 c7 96 b6 9f 60 3b 57 ae 28 |...F.......`;W.(|
  0160: ce b2 8d e9 f4 3e 6f 66 53 dd e5 6b ad 67 be f9 |.....>ofS..k.g..|
  0170: 72 ee 5f 8d 61 3c 61 b6 f9 8c d8 a5 82 63 45 3d |r._.a<a......cE=|
  0180: a3 0c 61 90 68 24 28 87 50 b9 c2 97 c6 20 01 11 |..a.h$(.P.... ..|
  0190: 80 84 10 98 cf e8 e4 13 96 05 51 2c 38 f3 c4 ec |..........Q,8...|
  01a0: ea 43 e7 96 5e 6a c8 be 11 dd 32 78 a2 fa dd 8f |.C..^j....2x....|
  01b0: b3 61 84 61 51 0c b3 cd 27 64 42 6b c2 b4 92 1e |.a.aQ...'dBk....|
  01c0: 86 8c 12 68 24 00 10 db 7f 50 00 c6 91 e7 fa 4c |...h$....P.....L|
  01d0: 22 22 cc bf 84 81 0a 92 c1 aa 2a c7 1b 49 e6 ee |""........*..I..|
  01e0: 6b a9 7e e0 e9 b2 91 5e 7c 73 68 e0 fc 23 3f 34 |k.~....^|sh..#?4|
  01f0: ed cf 0e f2 b3 d3 4c d7 ae 59 33 6f 8c 3d b8 63 |......L..Y3o.=.c|
  0200: 21 2b e8 3d e0 6f 9d 3a b7 f9 dc 24 2a b2 3e a7 |!+.=.o.:...$*.>.|
  0210: 58 dc 91 d8 40 e9 23 8e 88 84 ae 0f b9 00 2e b5 |X...@.#.........|
  0220: 74 36 f3 40 53 40 34 15 c0 d7 12 8d e7 bb 65 f9 |t6.@S@4.......e.|
  0230: c8 ef 03 0f ff f9 fe b6 8a 0d 6d fd ec 51 70 f7 |..........m..Qp.|
  0240: a7 ad 9b 6b 9d da 74 7b 53 43 d1 43 63 fd 19 f9 |...k..t{SC.Cc...|
  0250: ca 67 95 e5 ef c4 e6 6c 9e 44 e1 c5 ac 7a 82 6f |.g.....l.D...z.o|
  0260: c2 e1 d2 b5 2d 81 29 f0 5d 09 6c 6f 10 ae 88 cf |....-.).].lo....|
  0270: 25 05 d0 93 06 78 80 60 43 2d 10 1b 47 71 2b b7 |%....x.`C-..Gq+.|
  0280: 7f bb e9 a7 e4 7d 67 7b df 9b f7 62 cf cd d8 f4 |.....}g{...b....|
  0290: 48 bc 64 51 57 43 ff ea 8b 0b ae 74 64 53 07 86 |H.dQWC.....tdS..|
  02a0: fa 66 3c 5e f7 e1 af a7 c2 90 ff a7 be 9e c9 29 |.f<^...........)|
  02b0: b6 cc 41 48 18 69 94 8b 7c 04 7d 8c 98 a7 95 50 |..AH.i..|.}....P|
  02c0: 44 d9 d0 20 c8 14 30 14 51 ad 6c 16 03 94 0f 5a |D.. ..0.Q.l....Z|
  02d0: 46 93 7f 1c 87 8d 25 d7 9d a2 d1 92 4c f3 c2 54 |F.....%.....L..T|
  02e0: ba f8 70 18 ca 24 0a 29 96 43 71 f2 93 95 74 18 |..p..$.).Cq...t.|
  02f0: b5 65 c4 b8 f6 6c 5c 34 20 1e d5 0c 21 c0 b1 90 |.e...l\4 ...!...|
  0300: 9e 12 40 b9 18 fa 5a 00 41 a2 39 d3 a9 c1 73 21 |..@...Z.A.9...s!|
  0310: 8e 5e 3c b9 b8 f8 48 6a 76 46 a7 1a b6 dd 5b 51 |.^<...HjvF....[Q|
  0320: 5e 19 1d 59 12 c6 32 89 02 9a c0 8f 4f b8 0a ba |^..Y..2.....O...|
  0330: 5e ec 58 37 44 a3 2f dd 33 ed c9 d3 dd c7 22 1b |^.X7D./.3.....".|
  0340: 2f d4 94 8e 95 3f 77 a7 ae 6e f3 32 8d bb 4a 4c |/....?w..n.2..JL|
  0350: b8 0a 5a 43 34 3a b3 3a d6 77 ff 5c b6 fa ad f9 |..ZC4:.:.w.\....|
  0360: db fb 6a 33 df c1 7d 99 cf ef d4 d5 6d da 77 7c |..j3..}.....m.w||
  0370: 3b 19 fd af c5 3f f1 60 c3 17 |;....?.`..|
  $ hg debugbundle ../rev.hg2.bz
  Stream params: {Compression: GZ}
  changegroup -- {}
      32af7686d403cf45b5d95f2d70cebea587ac806a
      9520eea781bcca16c1e15acc0ba14335a0e8e5ba
      eea13746799a9e0bfd88f29d3c2e9dc9389f524f
      02de42196ebee42ef284b6780a87cdc96e8eaab6
  $ hg unbundle ../rev.hg2.bz
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  (run 'hg update' to get a working copy)
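The stream-level parameter block here is 14 bytes (`0x0e`), the literal `Compression=GZ`, and everything after it is a zlib stream (note the `78 9c` zlib header right after the parameter in the dump above). A sketch of reading the parameter and inflating the remainder, assuming zlib for `GZ`; the payload below is synthetic, and stream params are in general url-quoted and space-separated, which this minimal version glosses over:

```python
import struct
import zlib

def decompress_bundle2(data):
    """Read the HG20 magic and stream params; if Compression=GZ is
    present, inflate the rest with zlib. Sketch, not Mercurial's API."""
    assert data[:4] == b"HG20", "bad magic"
    (paramssize,) = struct.unpack_from(">I", data, 4)
    params = data[8:8 + paramssize].decode("ascii")
    body = data[8 + paramssize:]
    if "Compression=GZ" in params.split(" "):
        body = zlib.decompress(body)
    return params, body

# Round-trip on a synthetic stream with a made-up payload:
payload = b"\x00" * 32
stream = b"HG20" + struct.pack(">I", 14) + b"Compression=GZ" + zlib.compress(payload)
params, body = decompress_bundle2(stream)
```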

Simple case where it just works: BZ
-----------------------------------

  $ hg bundle2 --compress BZ --rev '8+7+5+4' ../rev.hg2.bz
  $ f --hexdump ../rev.hg2.bz
  ../rev.hg2.bz:
  0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
  0010: 69 6f 6e 3d 42 5a 42 5a 68 39 31 41 59 26 53 59 |ion=BZBZh91AY&SY|
  0020: a3 4b 18 3d 00 00 1a 7f ff ff bf 5f f6 ef ef 7f |.K.=......._....|
  0030: f6 3f f7 d1 d9 ff ff f7 6e ff ff 6e f7 f6 bd df |.?......n..n....|
  0040: b5 ab ff cf 67 f6 e7 7b f7 c0 02 d7 33 82 8b 51 |....g..{....3..Q|
  0050: 04 a5 53 d5 3d 27 a0 99 18 4d 0d 34 00 d1 a1 e8 |..S.='...M.4....|
  0060: 80 c8 7a 87 a9 a3 43 6a 3d 46 86 26 80 34 3d 40 |..z...Cj=F.&.4=@|
  0070: c8 c9 b5 34 f4 8f 48 0f 51 ea 34 34 fd 4d aa 19 |...4..H.Q.44.M..|
  0080: 03 40 0c 08 da 86 43 d4 f5 0f 42 1e a0 f3 54 33 |.@....C...B...T3|
  0090: 54 d3 13 4d 03 40 32 00 00 32 03 26 80 0d 00 0d |T..M.@2..2.&....|
  00a0: 00 68 c8 c8 03 20 32 30 98 8c 80 00 00 03 4d 00 |.h... 20......M.|
  00b0: c8 00 00 0d 00 00 22 99 a1 34 c2 64 a6 d5 34 1a |......"..4.d..4.|
  00c0: 00 00 06 86 83 4d 07 a8 d1 a0 68 01 a0 00 00 00 |.....M....h.....|
  00d0: 00 0d 06 80 00 00 00 0d 00 03 40 00 00 04 a4 a1 |..........@.....|
  00e0: 4d a9 89 89 b4 9a 32 0c 43 46 86 87 a9 8d 41 9a |M.....2.CF....A.|
  00f0: 98 46 9a 0d 31 32 1a 34 0d 0c 8d a2 0c 98 4d 06 |.F..12.4......M.|
  0100: 8c 40 c2 60 8d 0d 0c 20 c9 89 fa a0 d0 d3 21 a1 |.@.`... ......!.|
  0110: ea 34 d3 68 9e a6 d1 74 05 33 cb 66 96 93 28 64 |.4.h...t.3.f..(d|
  0120: 40 91 22 ac 55 9b ea 40 7b 38 94 e2 f8 06 00 cb |@.".U..@{8......|
  0130: 28 02 00 4d ab 40 24 10 43 18 cf 64 b4 06 83 0c |(..M.@$.C..d....|
  0140: 34 6c b4 a3 d4 0a 0a e4 a8 5c 4e 23 c0 c9 7a 31 |4l.......\N#..z1|
  0150: 97 87 77 7a 64 88 80 8e 60 97 20 93 0f 8e eb c4 |..wzd...`. .....|
  0160: 62 a4 44 a3 52 20 b2 99 a9 2e e1 d7 29 4a 54 ac |b.D.R ......)JT.|
  0170: 44 7a bb cc 04 3d e0 aa bd 6a 33 5e 9b a2 57 36 |Dz...=...j3^..W6|
  0180: fa cb 45 bb 6d 3e c1 d9 d9 f5 83 69 8a d0 e0 e2 |..E.m>.....i....|
  0190: e7 ae 90 55 24 da 3f ab 78 c0 4c b4 56 a3 9e a4 |...U$.?.x.L.V...|
  01a0: af 9c 65 74 86 ec 6d dc 62 dc 33 ca c8 50 dd 9d |..et..m.b.3..P..|
  01b0: 98 8e 9e 59 20 f3 f0 42 91 4a 09 f5 75 8d 3d a5 |...Y ..B.J..u.=.|
  01c0: a5 15 cb 8d 10 63 b0 c2 2e b2 81 f7 c1 76 0e 53 |.....c.......v.S|
  01d0: 6c 0e 46 73 b5 ae 67 f9 4c 0b 45 6b a8 32 2a 2f |l.Fs..g.L.Ek.2*/|
  01e0: a2 54 a4 44 05 20 a1 38 d1 a4 c6 09 a8 2b 08 99 |.T.D. .8.....+..|
  01f0: a4 14 ae 8d a3 e3 aa 34 27 d8 44 ca c3 5d 21 8b |.......4'.D..]!.|
  0200: 1a 1e 97 29 71 2b 09 4a 4a 55 55 94 58 65 b2 bc |...)q+.JJUU.Xe..|
  0210: f3 a5 90 26 36 76 67 7a 51 98 d6 8a 4a 99 50 b5 |...&6vgzQ...J.P.|
  0220: 99 8f 94 21 17 a9 8b f3 ad 4c 33 d4 2e 40 c8 0c |...!.....L3..@..|
  0230: 3b 90 53 39 db 48 02 34 83 48 d6 b3 99 13 d2 58 |;.S9.H.4.H.....X|
  0240: 65 8e 71 ac a9 06 95 f2 c4 8e b4 08 6b d3 0c ae |e.q.........k...|
  0250: d9 90 56 71 43 a7 a2 62 16 3e 50 63 d3 57 3c 2d |..VqC..b.>Pc.W<-|
  0260: 9f 0f 34 05 08 d8 a6 4b 59 31 54 66 3a 45 0c 8a |..4....KY1Tf:E..|
  0270: c7 90 3a f0 6a 83 1b f5 ca fb 80 2b 50 06 fb 51 |..:.j......+P..Q|
  0280: 7e a6 a4 d4 81 44 82 21 54 00 5b 1a 30 83 62 a3 |~....D.!T.[.0.b.|
  0290: 18 b6 24 19 1e 45 df 4d 5c db a6 af 5b ac 90 fa |..$..E.M\...[...|
  02a0: 3e ed f9 ec 4c ba 36 ee d8 60 20 a7 c7 3b cb d1 |>...L.6..` ..;..|
  02b0: 90 43 7d 27 16 50 5d ad f4 14 07 0b 90 5c cc 6b |.C}'.P]......\.k|
  02c0: 8d 3f a6 88 f4 34 37 a8 cf 14 63 36 19 f7 3e 28 |.?...47...c6..>(|
  02d0: de 99 e8 16 a4 9d 0d 40 a1 a7 24 52 14 a6 72 62 |.......@..$R..rb|
  02e0: 59 5a ca 2d e5 51 90 78 88 d9 c6 c7 21 d0 f7 46 |YZ.-.Q.x....!..F|
  02f0: b2 04 46 44 4e 20 9c 12 b1 03 4e 25 e0 a9 0c 58 |..FDN ....N%...X|
  0300: 5b 1d 3c 93 20 01 51 de a9 1c 69 23 32 46 14 b4 |[.<. .Q...i#2F..|
  0310: 90 db 17 98 98 50 03 90 29 aa 40 b0 13 d8 43 d2 |.....P..).@...C.|
  0320: 5f c5 9d eb f3 f2 ad 41 e8 7a a9 ed a1 58 84 a6 |_......A.z...X..|
  0330: 42 bf d6 fc 24 82 c1 20 32 26 4a 15 a6 1d 29 7f |B...$.. 2&J...).|
  0340: 7e f4 3d 07 bc 62 9a 5b ec 44 3d 72 1d 41 8b 5c |~.=..b.[.D=r.A.\|
  0350: 80 de 0e 62 9a 2e f8 83 00 d5 07 a0 9c c6 74 98 |...b..........t.|
  0360: 11 b2 5e a9 38 02 03 ee fd 86 5c f4 86 b3 ae da |..^.8.....\.....|
  0370: 05 94 01 c5 c6 ea 18 e6 ba 2a ba b3 04 5c 96 89 |.........*...\..|
  0380: 72 63 5b 10 11 f6 67 34 98 cb e4 c0 4e fa e6 99 |rc[...g4....N...|
  0390: 19 6e 50 e8 26 8d 0c 17 e0 be ef e1 8e 02 6f 32 |.nP.&.........o2|
  03a0: 82 dc 26 f8 a1 08 f3 8a 0d f3 c4 75 00 48 73 b8 |..&........u.Hs.|
  03b0: be 3b 0d 7f d0 fd c7 78 96 ec e0 03 80 68 4d 8d |.;.....x.....hM.|
  03c0: 43 8c d7 68 58 f9 50 f0 18 cb 21 58 1b 60 cd 1f |C..hX.P...!X.`..|
  03d0: 84 36 2e 16 1f 0a f7 4e 8f eb df 01 2d c2 79 0b |.6.....N....-.y.|
  03e0: f7 24 ea 0d e8 59 86 51 6e 1c 30 a3 ad 2f ee 8c |.$...Y.Qn.0../..|
  03f0: 90 c8 84 d5 e8 34 c1 95 b2 c9 f6 4d 87 1c 7d 19 |.....4.....M..}.|
  0400: d6 41 58 56 7a e0 6c ba 10 c7 e8 33 39 36 96 e7 |.AXVz.l....396..|
  0410: d2 f9 59 9a 08 95 48 38 e7 0b b7 0a 24 67 c4 39 |..Y...H8....$g.9|
  0420: 8b 43 88 57 9c 01 f5 61 b5 e1 27 41 7e af 83 fe |.C.W...a..'A~...|
  0430: 2e e4 8a 70 a1 21 46 96 30 7a |...p.!F.0z|
  $ hg debugbundle ../rev.hg2.bz
  Stream params: {Compression: BZ}
  changegroup -- {}
      32af7686d403cf45b5d95f2d70cebea587ac806a
      9520eea781bcca16c1e15acc0ba14335a0e8e5ba
      eea13746799a9e0bfd88f29d3c2e9dc9389f524f
      02de42196ebee42ef284b6780a87cdc96e8eaab6
  $ hg unbundle ../rev.hg2.bz
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
1219 added 0 changesets with 0 changes to 3 files
1219 (run 'hg update' to get a working copy)
1220 (run 'hg update' to get a working copy)
1220
1221
1221 unknown compression while unbundling
1222 unknown compression while unbundling
1222 -----------------------------
1223 -----------------------------
1223
1224
1224 $ hg bundle2 --param Compression=FooBarUnknown --rev '8+7+5+4' ../rev.hg2.bz
1225 $ hg bundle2 --param Compression=FooBarUnknown --rev '8+7+5+4' ../rev.hg2.bz
1225 $ cat ../rev.hg2.bz | hg statbundle2
1226 $ cat ../rev.hg2.bz | hg statbundle2
1226 abort: unknown parameters: Stream Parameter - Compression='FooBarUnknown'
1227 abort: unknown parameters: Stream Parameter - Compression='FooBarUnknown'
1227 [255]
1228 [255]
1228 $ hg unbundle ../rev.hg2.bz
1229 $ hg unbundle ../rev.hg2.bz
1229 abort: ../rev.hg2.bz: unknown bundle feature, Stream Parameter - Compression='FooBarUnknown'
1230 abort: ../rev.hg2.bz: unknown bundle feature, Stream Parameter - Compression='FooBarUnknown'
1230 (see https://mercurial-scm.org/wiki/BundleFeature for more information)
1231 (see https://mercurial-scm.org/wiki/BundleFeature for more information)
1231 [255]
1232 [255]
1232
1233
1233 $ cd ..
1234 $ cd ..
@@ -1,260 +1,260 b''
Create an extension to test bundle2 with multiple changegroups

  $ cat > bundle2.py <<EOF
  > """
  > """
  > from mercurial import changegroup, discovery, exchange
  >
  > def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
  >                               b2caps=None, heads=None, common=None,
  >                               **kwargs):
  >     # Create two changegroups given the common changesets and heads for the
  >     # changegroup part we are being requested. Use the parent of each head
  >     # in 'heads' as intermediate heads for the first changegroup.
  >     intermediates = [repo[r].p1().node() for r in heads]
  >     outgoing = discovery.outgoing(repo, common, intermediates)
-  >     cg = changegroup.getchangegroup(repo, source, outgoing,
-  >                                     bundlecaps=bundlecaps)
+  >     cg = changegroup.makechangegroup(repo, outgoing, '01',
+  >                                      source, bundlecaps=bundlecaps)
   >     bundler.newpart('output', data='changegroup1')
   >     bundler.newpart('changegroup', data=cg.getchunks())
   >     outgoing = discovery.outgoing(repo, common + intermediates, heads)
-  >     cg = changegroup.getchangegroup(repo, source, outgoing,
-  >                                     bundlecaps=bundlecaps)
+  >     cg = changegroup.makechangegroup(repo, outgoing, '01',
+  >                                      source, bundlecaps=bundlecaps)
  >     bundler.newpart('output', data='changegroup2')
  >     bundler.newpart('changegroup', data=cg.getchunks())
  >
  > def _pull(repo, *args, **kwargs):
  >     pullop = _orig_pull(repo, *args, **kwargs)
  >     repo.ui.write('pullop.cgresult is %d\n' % pullop.cgresult)
  >     return pullop
  >
  > _orig_pull = exchange.pull
  > exchange.pull = _pull
  > exchange.getbundle2partsmapping['changegroup'] = _getbundlechangegrouppart
  > EOF
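The only functional change in this file is the switch from `changegroup.getchangegroup` to `changegroup.makechangegroup`: the changegroup version (`'01'` here) becomes an explicit positional argument, and `source` moves after it. A minimal sketch of the argument mapping (the shim and the `_stub` callable are hypothetical, for illustration only; real code calls `mercurial.changegroup.makechangegroup` directly):

```python
def getchangegroup_compat(repo, source, outgoing, bundlecaps=None,
                          makechangegroup=None):
    # Hypothetical shim: maps the legacy getchangegroup() argument order
    # onto the new makechangegroup() order, with changegroup version '01'
    # (the version the test extension requests) made explicit.
    return makechangegroup(repo, outgoing, '01', source,
                           bundlecaps=bundlecaps)

# Stub standing in for mercurial.changegroup.makechangegroup, so the
# argument reordering is visible without a Mercurial checkout:
def _stub(repo, outgoing, version, source, bundlecaps=None):
    return (repo, outgoing, version, source, bundlecaps)

result = getchangegroup_compat('repo', 'pull', 'outgoing',
                               bundlecaps={'cg'}, makechangegroup=_stub)
# result == ('repo', 'outgoing', '01', 'pull', {'cg'})
```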

  $ cat >> $HGRCPATH << EOF
  > [ui]
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > EOF

Start with a simple repository with a single commit

  $ hg init repo
  $ cd repo
  $ cat > .hg/hgrc << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > EOF

  $ echo A > A
  $ hg commit -A -m A -q
  $ cd ..

Clone

  $ hg clone -q repo clone

Add two linear commits

  $ cd repo
  $ echo B > B
  $ hg commit -A -m B -q
  $ echo C > C
  $ hg commit -A -m C -q

  $ cd ../clone
  $ cat >> .hg/hgrc <<EOF
  > [hooks]
  > pretxnchangegroup = sh -c "printenv.py pretxnchangegroup"
  > changegroup = sh -c "printenv.py changegroup"
  > incoming = sh -c "printenv.py incoming"
  > EOF

Pull the new commits in the clone

  $ hg pull
  pulling from $TESTTMP/repo (glob)
  searching for changes
  remote: changegroup1
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  remote: changegroup2
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  pullop.cgresult is 1
  (run 'hg update' to get a working copy)
  $ hg update
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg log -G
  @  2:f838bfaca5c7 public test  C
  |
  o  1:27547f69f254 public test  B
  |
  o  0:4a2df7238c3b public test  A
  
Add more changesets with multiple heads to the original repository

  $ cd ../repo
  $ echo D > D
  $ hg commit -A -m D -q
  $ hg up -r 1
  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
  $ echo E > E
  $ hg commit -A -m E -q
  $ echo F > F
  $ hg commit -A -m F -q
  $ hg up -r 1
  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
  $ echo G > G
  $ hg commit -A -m G -q
  $ hg up -r 3
  2 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo H > H
  $ hg commit -A -m H -q
  $ hg log -G
  @  7:5cd59d311f65 draft test  H
  |
  | o  6:1d14c3ce6ac0 draft test  G
  | |
  | | o  5:7f219660301f draft test  F
  | | |
  | | o  4:8a5212ebc852 draft test  E
  | |/
  o |  3:b3325c91a4d9 draft test  D
  | |
  o |  2:f838bfaca5c7 draft test  C
  |/
  o  1:27547f69f254 draft test  B
  |
  o  0:4a2df7238c3b draft test  A
  
New heads are reported during transfer and properly accounted for in
pullop.cgresult

  $ cd ../clone
  $ hg pull
  pulling from $TESTTMP/repo (glob)
  searching for changes
  remote: changegroup1
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  remote: changegroup2
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 3 changes to 3 files (+1 heads)
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=1d14c3ce6ac0582d2809220d33e8cd7a696e0156 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  pullop.cgresult is 3
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg log -G
  o  7:5cd59d311f65 public test  H
  |
  | o  6:1d14c3ce6ac0 public test  G
  | |
  | | o  5:7f219660301f public test  F
  | | |
  | | o  4:8a5212ebc852 public test  E
  | |/
  o |  3:b3325c91a4d9 public test  D
  | |
  @ |  2:f838bfaca5c7 public test  C
  |/
  o  1:27547f69f254 public test  B
  |
  o  0:4a2df7238c3b public test  A
  
Removing a head from the original repository by merging it

  $ cd ../repo
  $ hg merge -r 6 -q
  $ hg commit -m Merge
  $ echo I > I
  $ hg commit -A -m H -q
  $ hg log -G
  @  9:9d18e5bd9ab0 draft test  H
  |
  o    8:71bd7b46de72 draft test  Merge
  |\
  | o  7:5cd59d311f65 draft test  H
  | |
  o |  6:1d14c3ce6ac0 draft test  G
  | |
  | | o  5:7f219660301f draft test  F
  | | |
  +---o  4:8a5212ebc852 draft test  E
  | |
  | o  3:b3325c91a4d9 draft test  D
  | |
  | o  2:f838bfaca5c7 draft test  C
  |/
  o  1:27547f69f254 draft test  B
  |
  o  0:4a2df7238c3b draft test  A
  
Removed heads are reported during transfer and properly accounted for in
pullop.cgresult

  $ cd ../clone
  $ hg pull
  pulling from $TESTTMP/repo (glob)
  searching for changes
  remote: changegroup1
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files (-1 heads)
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  remote: changegroup2
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
  pullop.cgresult is -2
  (run 'hg update' to get a working copy)
  $ hg log -G
  o  9:9d18e5bd9ab0 public test  H
  |
  o    8:71bd7b46de72 public test  Merge
  |\
  | o  7:5cd59d311f65 public test  H
  | |
  o |  6:1d14c3ce6ac0 public test  G
  | |
  | | o  5:7f219660301f public test  F
  | | |
  +---o  4:8a5212ebc852 public test  E
  | |
  | o  3:b3325c91a4d9 public test  D
  | |
  | @  2:f838bfaca5c7 public test  C
  |/
  o  1:27547f69f254 public test  B
  |
  o  0:4a2df7238c3b public test  A
  
@@ -1,590 +1,591 b''
#require killdaemons

Create an extension to test bundle2 remote-changegroup parts

  $ cat > bundle2.py << EOF
  > """A small extension to test bundle2 remote-changegroup parts.
  >
  > Current bundle2 implementation doesn't provide a way to generate those
  > parts, so they must be created by extensions.
  > """
  > from mercurial import bundle2, changegroup, discovery, exchange, util
  >
  > def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
  >                               b2caps=None, heads=None, common=None,
  >                               **kwargs):
  >     """this function replaces the changegroup part handler for getbundle.
  >     It allows to create a set of arbitrary parts containing changegroups
  >     and remote-changegroups, as described in a bundle2maker file in the
  >     repository .hg/ directory.
  >
  >     Each line of that bundle2maker file contain a description of the
  >     part to add:
  >     - changegroup common_revset heads_revset
  >         Creates a changegroup part based, using common_revset and
  >         heads_revset for outgoing
  >     - remote-changegroup url file
  >         Creates a remote-changegroup part for a bundle at the given
  >         url. Size and digest, as required by the client, are computed
  >         from the given file.
  >     - raw-remote-changegroup <python expression>
  >         Creates a remote-changegroup part with the data given in the
  >         Python expression as parameters. The Python expression is
  >         evaluated with eval, and is expected to be a dict.
  >     """
  >     def newpart(name, data=''):
  >         """wrapper around bundler.newpart adding an extra part making the
  >         client output information about each processed part"""
  >         bundler.newpart('output', data=name)
  >         part = bundler.newpart(name, data=data)
  >         return part
  >
  >     for line in open(repo.vfs.join('bundle2maker'), 'r'):
  >         line = line.strip()
  >         try:
  >             verb, args = line.split(None, 1)
  >         except ValueError:
  >             verb, args = line, ''
  >         if verb == 'remote-changegroup':
  >             url, file = args.split()
  >             bundledata = open(file, 'rb').read()
  >             digest = util.digester.preferred(b2caps['digests'])
  >             d = util.digester([digest], bundledata)
  >             part = newpart('remote-changegroup')
  >             part.addparam('url', url)
  >             part.addparam('size', str(len(bundledata)))
  >             part.addparam('digests', digest)
  >             part.addparam('digest:%s' % digest, d[digest])
  >         elif verb == 'raw-remote-changegroup':
  >             part = newpart('remote-changegroup')
  >             for k, v in eval(args).items():
  >                 part.addparam(k, str(v))
  >         elif verb == 'changegroup':
  >             _common, heads = args.split()
  >             common.extend(repo.lookup(r) for r in repo.revs(_common))
  >             heads = [repo.lookup(r) for r in repo.revs(heads)]
  >             outgoing = discovery.outgoing(repo, common, heads)
-  >             cg = changegroup.getchangegroup(repo, 'changegroup', outgoing)
+  >             cg = changegroup.makechangegroup(repo, outgoing, '01',
+  >                                              'changegroup')
   >             newpart('changegroup', cg.getchunks())
  >         else:
  >             raise Exception('unknown verb')
  >
  > exchange.getbundle2partsmapping['changegroup'] = _getbundlechangegrouppart
  > EOF
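The `remote-changegroup` branch above advertises the bundle's size and digest so the client can verify what it later downloads from the part's url. A standalone sketch of that computation (a simplification: it uses `hashlib` directly instead of `util.digester`, and hard-codes sha1 rather than picking the digest from the client's advertised `b2caps['digests']`):

```python
import hashlib

def remote_changegroup_params(bundledata, algo='sha1'):
    # Compute the integrity parameters the part carries: the payload size
    # and a digest of the exact bytes the client is expected to download,
    # keyed the way the part parameters above are named.
    h = hashlib.new(algo)
    h.update(bundledata)
    return {'size': str(len(bundledata)),
            'digests': algo,
            'digest:%s' % algo: h.hexdigest()}

params = remote_changegroup_params(b'example bundle payload')
```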
74
75
75 Start a simple HTTP server to serve bundles
76 Start a simple HTTP server to serve bundles
76
77
77 $ $PYTHON "$TESTDIR/dumbhttp.py" -p $HGPORT --pid dumb.pid
78 $ $PYTHON "$TESTDIR/dumbhttp.py" -p $HGPORT --pid dumb.pid
78 $ cat dumb.pid >> $DAEMON_PIDS
79 $ cat dumb.pid >> $DAEMON_PIDS
79
80
  $ cat >> $HGRCPATH << EOF
  > [ui]
  > ssh=$PYTHON "$TESTDIR/dummyssh"
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > EOF

  $ hg init repo

  $ hg -R repo unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg -R repo log -G
  o 7:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o 6:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o | 5:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o 4:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | o 3:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | o 2:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o 1:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o 0:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg clone repo orig
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ cat > repo/.hg/hgrc << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > EOF

Test a pull with a remote-changegroup

  $ hg bundle -R repo --type v1 --base '0:4' -r '5:7' bundle.hg
  3 changesets found
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/bundle.hg bundle.hg
  > EOF
  $ hg clone orig clone -r 3 -r 4
  adding changesets
  adding manifests
  adding file changes
  added 5 changesets with 5 changes to 5 files (+1 heads)
  updating to branch default
  4 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 2 changes to 2 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R clone log -G
  o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | @ 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | o 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ rm -rf clone

Test a pull with a remote-changegroup and a following changegroup

  $ hg bundle -R repo --type v1 --base 2 -r '3:4' bundle2.hg
  2 changesets found
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/bundle2.hg bundle2.hg
  > changegroup 0:4 5:7
  > EOF
  $ hg clone orig clone -r 2
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  remote: changegroup
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 2 changes to 2 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R clone log -G
  o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ rm -rf clone

Test a pull with a changegroup followed by a remote-changegroup

  $ hg bundle -R repo --type v1 --base '0:4' -r '5:7' bundle3.hg
  3 changesets found
  $ cat > repo/.hg/bundle2maker << EOF
  > changegroup 000000000000 :4
  > remote-changegroup http://localhost:$HGPORT/bundle3.hg bundle3.hg
  > EOF
  $ hg clone orig clone -r 2
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 2 changes to 2 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R clone log -G
  o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ rm -rf clone

Test a pull with two remote-changegroups and a changegroup

  $ hg bundle -R repo --type v1 --base 2 -r '3:4' bundle4.hg
  2 changesets found
  $ hg bundle -R repo --type v1 --base '3:4' -r '5:6' bundle5.hg
  2 changesets found
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/bundle4.hg bundle4.hg
  > remote-changegroup http://localhost:$HGPORT/bundle5.hg bundle5.hg
  > changegroup 0:6 7
  > EOF
  $ hg clone orig clone -r 2
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1 changes to 1 files
  remote: changegroup
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R clone log -G
  o 7:02de42196ebe public Nicolas Dumazet <nicdumz.commits@gmail.com> H
  |
  | o 6:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> G
  |/|
  o | 5:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  | o 4:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  | o 3:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> D
  | |
  | @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  | |
  | o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ rm -rf clone

Hash digest tests

  $ hg bundle -R repo --type v1 -a bundle6.hg
  8 changesets found

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'sha1', 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ rm -rf clone

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394'}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ rm -rf clone

Hash digest mismatch throws an error

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'sha1', 'digest:sha1': '0' * 40}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
  sha1 mismatch: expected 0000000000000000000000000000000000000000, got 2c880cfec23cff7d8f80c2f12958d1563cbdaba6
  [255]

Multiple hash digests can be given

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394', 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ rm -rf clone

If either of the multiple hash digests mismatches, an error is thrown

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': '0' * 32, 'digest:sha1': '2c880cfec23cff7d8f80c2f12958d1563cbdaba6'}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
  md5 mismatch: expected 00000000000000000000000000000000, got e22172c2907ef88794b7bea6642c2394
  [255]

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle6.hg', 'size': 1663, 'digests': 'md5 sha1', 'digest:md5': 'e22172c2907ef88794b7bea6642c2394', 'digest:sha1': '0' * 40}
  > EOF
  $ hg clone ssh://user@dummy/repo clone
  requesting all changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+2 heads)
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle6.hg is corrupted:
  sha1 mismatch: expected 0000000000000000000000000000000000000000, got 2c880cfec23cff7d8f80c2f12958d1563cbdaba6
  [255]

Corruption tests

  $ hg clone orig clone -r 2
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 3 changes to 3 files
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/bundle4.hg bundle4.hg
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle5.hg', 'size': 578, 'digests': 'sha1', 'digest:sha1': '0' * 40}
  > changegroup 0:6 7
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1 changes to 1 files
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle5.hg is corrupted:
  sha1 mismatch: expected 0000000000000000000000000000000000000000, got f29485d6bfd37db99983cfc95ecb52f8ca396106
  [255]

The entire transaction has been rolled back in the pull above

  $ hg -R clone log -G
  @ 2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> C
  |
  o 1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> B
  |
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A


No params

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "url" param
  [255]

Missing size

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "size" param
  [255]

Invalid size

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 'foo'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: invalid value for param "size"
  [255]

Size mismatch

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 42}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files (+1 heads)
  transaction abort!
  rollback completed
  abort: bundle at http://localhost:$HGPORT/bundle4.hg is corrupted:
  size mismatch: expected 42, got 581
  [255]

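The `size mismatch` abort here, like the digest mismatches earlier, comes from the pulling side comparing the advertised part parameters against the bytes it actually downloaded. A minimal sketch of that check, with a hypothetical helper name (the messages mirror the test output above, not Mercurial's internal API):

```python
import hashlib

def check_bundle(data, size, digests):
    # Illustrative sketch: validate downloaded bundle bytes against the
    # advertised 'size' and 'digest:<algo>' parameters. `digests` maps
    # an algorithm name to its expected hex digest.
    if len(data) != size:
        raise ValueError('size mismatch: expected %d, got %d'
                         % (size, len(data)))
    for algo, expected in digests.items():
        got = hashlib.new(algo, data).hexdigest()
        if got != expected:
            raise ValueError('%s mismatch: expected %s, got %s'
                             % (algo, expected, got))
```

Because the check can only run once the full payload has arrived, any changesets already applied from the bad bundle have to be rolled back, which is why the tests show a transaction abort after the changesets were added.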
Unknown digest

517 $ cat > repo/.hg/bundle2maker << EOF
518 $ cat > repo/.hg/bundle2maker << EOF
518 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 581, 'digests': 'foo', 'digest:foo': 'bar'}
519 > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 581, 'digests': 'foo', 'digest:foo': 'bar'}
519 > EOF
520 > EOF
520 $ hg pull -R clone ssh://user@dummy/repo
521 $ hg pull -R clone ssh://user@dummy/repo
521 pulling from ssh://user@dummy/repo
522 pulling from ssh://user@dummy/repo
522 searching for changes
523 searching for changes
523 remote: remote-changegroup
524 remote: remote-changegroup
524 abort: missing support for remote-changegroup - digest:foo
525 abort: missing support for remote-changegroup - digest:foo
525 [255]
526 [255]

Missing digest

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'http://localhost:$HGPORT/bundle4.hg', 'size': 581, 'digests': 'sha1'}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup: missing "digest:sha1" param
  [255]

Not an HTTP url

  $ cat > repo/.hg/bundle2maker << EOF
  > raw-remote-changegroup {'url': 'ssh://localhost:$HGPORT/bundle4.hg', 'size': 581}
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: remote-changegroup does not support ssh urls
  [255]

Not a bundle

  $ cat > notbundle.hg << EOF
  > foo
  > EOF
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/notbundle.hg notbundle.hg
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: http://localhost:$HGPORT/notbundle.hg: not a Mercurial bundle
  [255]

Not a bundle 1.0

  $ cat > notbundle10.hg << EOF
  > HG20
  > EOF
  $ cat > repo/.hg/bundle2maker << EOF
  > remote-changegroup http://localhost:$HGPORT/notbundle10.hg notbundle10.hg
  > EOF
  $ hg pull -R clone ssh://user@dummy/repo
  pulling from ssh://user@dummy/repo
  searching for changes
  remote: remote-changegroup
  abort: http://localhost:$HGPORT/notbundle10.hg: not a bundle version 1.0
  [255]
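The last two cases probe header validation: a Mercurial bundle begins with magic bytes ('HG10' plus a compression indicator for bundle 1.0, 'HG20' for bundle2), a payload not starting with 'HG' is rejected as not a bundle at all, and remote-changegroup only accepts bundle 1.0 payloads, so the 'HG20' file aborts too. As an unindented comment ignored by the test runner, a minimal sketch of that kind of magic-byte dispatch (`classify_bundle` is a hypothetical name, not Mercurial's function):

```python
def classify_bundle(header):
    # Classify a bundle by its leading magic bytes. Illustrative only;
    # real Mercurial also validates the compression indicator that
    # follows 'HG10' ('UN', 'GZ' or 'BZ').
    if not header.startswith(b'HG'):
        raise ValueError('not a Mercurial bundle')
    if header.startswith(b'HG10'):
        return 'bundle1'
    if header.startswith(b'HG20'):
        return 'bundle2'
    raise ValueError('unknown bundle version')

print(classify_bundle(b'HG10UN...'))  # bundle1
print(classify_bundle(b'HG20...'))    # bundle2
```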

  $ hg -R clone log -G
  @  2:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com>  C
  |
  o  1:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com>  B
  |
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A

  $ rm -rf clone

  $ killdaemons.py