bundle2: move exception classes into the error module...
Pierre-Yves David
r21618:7568f5c1 default
@@ -1,849 +1,839 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: (16 bits integer)

    The total number of bytes used by the parameters

:params value: arbitrary number of bytes

    A blob of `params size` containing the serialized version of all stream
    level parameters.

    The blob contains a space separated list of parameters. Parameters with a
    value are stored in the form `<name>=<value>`. Both name and value are
    urlquoted.

    Empty names are forbidden.

    A name MUST start with a letter. If this first letter is lower case, the
    parameter is advisory and can be safely ignored. However, when the first
    letter is capital, the parameter is mandatory and the bundling process
    MUST stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allow easy human inspection of a bundle2 header in case of
  trouble.

Any application-level options MUST go into a bundle2 part instead.
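The space-separated, urlquoted layout described above can be sketched in a few lines. This is a Python 3 illustration of the wire format only, not Mercurial's own code; the helper names are made up:

```python
import struct
from urllib.parse import quote, unquote

def encode_stream_params(params):
    # params: list of (name, value-or-None) pairs. Each piece is urlquoted,
    # joined with spaces, then length-prefixed with a 16-bit big-endian size.
    blocks = []
    for name, value in params:
        block = quote(name, safe='')
        if value is not None:
            block = '%s=%s' % (block, quote(value, safe=''))
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    return struct.pack('>H', len(blob)) + blob

def decode_stream_params(data):
    # Inverse: read the 16-bit size, then split and unquote each parameter.
    (size,) = struct.unpack('>H', data[:2])
    blob = data[2:2 + size].decode('ascii')
    params = []
    for block in (blob.split(' ') if blob else []):
        name, sep, value = block.partition('=')
        params.append((unquote(name), unquote(value) if sep else None))
    return params
```

A value-less parameter (like a mandatory capability flag) simply omits the `=` on the wire, which the decoder maps back to None.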

Payload part
------------------------

The binary format is as follows:

:header size: (16 bits integer)

    The total number of bytes used by the part headers. When the header is
    empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name

    :partid: A 32 bits integer (unique in the bundle) that can be used to
        refer to this part.

    :parameters:

        Part parameters may have arbitrary content; the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N pairs of bytes, where N is the total number of parameters. Each
            pair contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size pairs stored in the previous
            field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.
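The chunked payload framing lends itself to a compact generator. Below is a Python 3 sketch of reading `<chunksize><chunkdata>` runs until the zero-size terminator; `read_exactly` here is an illustrative stand-in for the module's `changegroup.readexactly`:

```python
import io
import struct

def read_exactly(fp, size):
    # Stand-in for changegroup.readexactly: fail loudly on short reads.
    data = fp.read(size)
    if len(data) != size:
        raise ValueError('stream ended unexpectedly')
    return data

def iter_payload_chunks(fp):
    # Yield chunkdata blobs until the zero-size chunk ends the payload.
    while True:
        (size,) = struct.unpack('>I', read_exactly(fp, 4))
        if size == 0:
            return
        yield read_exactly(fp, size)

def frame_payload(data):
    # Frame a single chunk followed by the zero-size terminator.
    return struct.pack('>I', len(data)) + data + struct.pack('>I', 0)

framed = frame_payload(b'hello')
chunks = b''.join(iter_payload_chunks(io.BytesIO(framed)))
```

This single-chunk framing matches the "zero or one chunk" limitation the docstring mentions; a later producer could emit several size-prefixed chunks before the terminator without changing the reader.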

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

import util
import struct
import urllib
import string

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_magicstring = 'HG2X'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)
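For instance, `_makefpartparamsizes(2)` yields the format `'>BBBB'`: one `(key size, value size)` byte pair per parameter, big-endian. A quick Python illustration of how such a dynamically built format is consumed (same construction, standalone names):

```python
import struct

def make_param_sizes_format(nbparams):
    # Same construction as _makefpartparamsizes: '>' plus one 'BB' per parameter.
    return '>' + ('BB' * nbparams)

fmt = make_param_sizes_format(2)     # '>BBBB'
raw = bytes([3, 5, 4, 0])            # two (key-size, value-size) pairs
sizes = struct.unpack(fmt, raw)      # flat tuple of all sizes
pairs = list(zip(sizes[0::2], sizes[1::2]))
```

The flat tuple from `struct.unpack` is re-paired by slicing, giving the per-parameter sizes used to carve keys and values out of the `param-data` blob.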

-class BundleValueError(ValueError):
-    """error raised when bundle2 cannot be processed
-
-    Current main usecase is unsupported part types."""
-    pass
-
-class ReadOnlyPartError(RuntimeError):
-    """error raised when code tries to alter a part being generated"""
-    pass
-
parthandlermapping = {}

def parthandler(parttype):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype')
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        return func
    return _decorator
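The registry pattern used by `parthandler` is easy to exercise in isolation. Here is a self-contained Python 3 sketch of the same decorator shape, using a hypothetical part name; as in the original, the mapping is keyed lowercase:

```python
parthandlers = {}

def parthandler(parttype):
    # Register func under the lowercased part type; on the wire, case
    # carries only the mandatory/advisory bit, not handler identity.
    def _decorator(func):
        key = parttype.lower()
        assert key not in parthandlers, 'duplicate handler'
        parthandlers[key] = func
        return func
    return _decorator

@parthandler('demo:echo')
def handle_echo(op, part):
    # A trivial handler: real ones apply the part's payload to the repo.
    return ('echo', part)
```

Lookup then happens on `parttype.lower()`, so `DEMO:ECHO` and `demo:echo` reach the same function.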

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
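A quick usage sketch of the records container described above. This is a Python 3 reimplementation for illustration only; the original is Python 2 and also supports reply subrecords via `getreplies`:

```python
class UnbundleRecords:
    # Minimal illustration: per-category lists plus a chronological sequence.
    def __init__(self):
        self._categories = {}
        self._sequences = []

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        # Unknown categories yield an empty tuple rather than raising.
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

records = UnbundleRecords()
records.add('changegroup', {'return': 1})
records.add('output', 'adding changesets\n')
```

Iteration preserves insertion order across categories, which is what lets callers replay "what happened" chronologically after processing.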

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=_notransaction):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            parttype = part.type
            # part keys are matched lower case
            key = parttype.lower()
            try:
                handler = parthandlermapping[key]
                op.ui.debug('found a handler for part %r\n' % parttype)
            except KeyError:
                if key != parttype: # mandatory parts
                    # todo:
                    # - use a more precise exception
-                    raise BundleValueError(key)
+                    raise error.BundleValueError(key)
                op.ui.debug('ignoring unknown advisory part %r\n' % key)
                # consuming the part
                part.read()
                continue

            # handler is called outside the above try block so that we don't
            # risk catching KeyErrors from anything other than the
            # parthandlermapping lookup (any KeyError raised by handler()
            # itself represents a defect of a different variety).
            output = None
            if op.reply is not None:
                op.ui.pushbuffer(error=True)
                output = ''
            try:
                handler(op, part)
            finally:
                if output is not None:
                    output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('b2x:output', data=output)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
            part.read()
    except Exception, exc:
        if part is not None:
            # consume the bundle content
            part.read()
        for part in iterparts:
            # consume the bundle content
            part.read()
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from the ones from bundle1 processing. This is mostly
        # needed to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        raise
    return op
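The `exc.duringunbundle2 = True` trick at the end of `processbundle` is a small but reusable pattern: tag an in-flight exception with an attribute so an outer caller can tell which layer raised it, without changing the exception type. A standalone Python 3 sketch of the same idea:

```python
def process(run):
    # Tag any escaping exception so the caller can tell this layer raised it,
    # mirroring the `exc.duringunbundle2 = True` hack in processbundle.
    try:
        return run()
    except Exception as exc:
        exc.duringunbundle2 = True
        raise

def boom():
    raise RuntimeError('bad part')

try:
    process(boom)
except RuntimeError as exc:
    # getattr with a default covers exceptions raised outside process().
    tagged = getattr(exc, 'duringunbundle2', False)
```

As the comment in the original admits, this is a stopgap; a dedicated exception hierarchy would be the cleaner design.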

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
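The caps blob format round-trips cleanly. Here is the same encode/decode pair as a Python 3 sketch (`urllib.quote`/`unquote` live in `urllib.parse` there); for uniformity this version uses a plain list where the original uses `()` for value-less capabilities:

```python
from urllib.parse import quote, unquote

def encode_caps(caps):
    # One capability per line; values comma-joined; everything urlquoted.
    chunks = []
    for name in sorted(caps):
        vals = [quote(v, safe='') for v in caps[name]]
        chunk = quote(name, safe='')
        if vals:
            chunk = '%s=%s' % (chunk, ','.join(vals))
        chunks.append(chunk)
    return '\n'.join(chunks)

def decode_caps(blob):
    # Inverse of encode_caps; a line without '=' is a value-less capability.
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        name, sep, vals = line.partition('=')
        caps[unquote(name)] = [unquote(v) for v in vals.split(',')] if sep else []
    return caps

roundtrip = decode_caps(encode_caps({'HG2X': [], 'b2x:listkeys': ['phases', 'bookmarks']}))
```

Urlquoting both names and values is what keeps the line- and comma-based framing unambiguous even when a value itself contains `\n`, `,`, or `=`.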

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        As the part is directly added to the container, any failure to
        properly initialize the part after calling ``newpart`` should result
        in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
            magic, version = header[0:2], header[2:4]
            if magic != 'HG':
                raise util.Abort(_('not a Mercurial bundle'))
            if version != '2X':
                raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params
508
498
509 def _processparam(self, name, value):
499 def _processparam(self, name, value):
510 """process a parameter, applying its effect if needed
500 """process a parameter, applying its effect if needed
511
501
512 Parameter starting with a lower case letter are advisory and will be
502 Parameter starting with a lower case letter are advisory and will be
513 ignored when unknown. Those starting with an upper case letter are
503 ignored when unknown. Those starting with an upper case letter are
514 mandatory and will this function will raise a KeyError when unknown.
504 mandatory and will this function will raise a KeyError when unknown.
515
505
516 Note: no option are currently supported. Any input will be either
506 Note: no option are currently supported. Any input will be either
517 ignored or failing.
507 ignored or failing.
518 """
508 """
519 if not name:
509 if not name:
520 raise ValueError('empty parameter name')
510 raise ValueError('empty parameter name')
521 if name[0] not in string.letters:
511 if name[0] not in string.letters:
522 raise ValueError('non letter first character: %r' % name)
512 raise ValueError('non letter first character: %r' % name)
523 # Some logic will be later added here to try to process the option for
513 # Some logic will be later added here to try to process the option for
524 # a dict of known parameter.
514 # a dict of known parameter.
525 if name[0].islower():
515 if name[0].islower():
526 self.ui.debug("ignoring unknown parameter %r\n" % name)
516 self.ui.debug("ignoring unknown parameter %r\n" % name)
527 else:
517 else:
528 raise KeyError(name)
518 raise KeyError(name)
529
519
530
520
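The stream-level parameter blob that `params` decodes is just a space-separated list of percent-quoted `key=value` pairs, with the case of the first letter deciding advisory versus mandatory handling. A standalone sketch of that parsing — Python 3's `urllib.parse` stands in for the Python 2 `urllib` used above, `isalpha` stands in for `string.letters`, and `parsestreamparams` is a hypothetical name:

```python
import urllib.parse

def parsestreamparams(blob):
    """Parse a bundle2 stream-parameter blob the way unbundle20.params does:
    space-separated, percent-quoted key=value pairs (the value is optional)."""
    params = {}
    for p in blob.split(' '):
        parts = p.split('=', 1)
        parts = [urllib.parse.unquote(i) for i in parts]
        if len(parts) < 2:
            parts.append(None)
        name, value = parts
        # empty names and non-letter first characters are rejected outright
        if not name[:1].isalpha():
            raise ValueError('non letter first character: %r' % name)
        # upper-case first letter means mandatory: unknown ones must fail
        if not name[:1].islower():
            raise KeyError(name)
        params[name] = value
    return params
```

The sketch treats every lower-case parameter as unknown-but-advisory, which matches the current state of the real `_processparam` (no options are supported yet).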
    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
-            raise ReadOnlyPartError('part is being generated')
+            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
-            raise ReadOnlyPartError('part is being generated')
+            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

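The payload framing that `getchunks` emits above can be modelled on its own: every chunk is preceded by its size, and a zero size terminates the part. A minimal sketch, assuming a big-endian 32-bit unsigned size field in place of the module's `_fpayloadsize` constant:

```python
import struct

_FPAYLOADSIZE = '>I'  # big-endian uint32; stands in for bundle2's _fpayloadsize

def framechunks(chunks):
    """Frame payload chunks the way bundlepart.getchunks does: each chunk is
    preceded by its size, and a zero size marks the end of the part."""
    out = []
    for chunk in chunks:
        out.append(struct.pack(_FPAYLOADSIZE, len(chunk)))
        out.append(chunk)
    out.append(struct.pack(_FPAYLOADSIZE, 0))
    return b''.join(out)

def unframechunks(data):
    """Inverse of framechunks, mirroring the payloadchunks() generator in
    unbundlepart._readheader: read a size, then that many payload bytes,
    until a zero size is seen."""
    offset = 0
    while True:
        (size,) = struct.unpack_from(_FPAYLOADSIZE, data, offset)
        offset += struct.calcsize(_FPAYLOADSIZE)
        if not size:
            break
        yield data[offset:offset + size]
        offset += size
```

This zero-terminated framing is what lets the reader stream a part of unknown total length without any size field in the part header.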
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data


@parthandler('b2x:changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('b2x:reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

@parthandler('b2x:reply:changegroup')
def handlechangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('b2x:check:heads')
def handlechangegroup(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('b2x:output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.write(('remote: %s\n' % line))

@parthandler('b2x:replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('b2x:error:abort')
def handlereplycaps(op, inpart):
    """Used to transmit abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('b2x:error:unknownpart')
def handlereplycaps(op, inpart):
    """Used to transmit unknown part error over the wire"""
-    raise BundleValueError(inpart.params['parttype'])
+    raise error.BundleValueError(inpart.params['parttype'])

@parthandler('b2x:error:pushraced')
def handlereplycaps(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])
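To see how `bundlepart.getchunks` and `unbundlepart._readheader` mirror each other, the part-header layout (type size, type, id, parameter counts, parameter sizes, then the key/value bytes) can be round-tripped in isolation. The struct formats below are stand-ins for the module's `_fparttypesize`, `_fpartid` and `_fpartparamcount` constants, and the helper names are hypothetical:

```python
import struct

# Assumed stand-ins for bundle2's format constants (the real values live in
# bundle2.py): one-byte type length, four-byte part id, one byte per count.
_FPARTTYPESIZE = '>B'
_FPARTID = '>I'
_FPARTPARAMCOUNT = '>BB'

def buildpartheader(parttype, partid, manpar, advpar):
    """Assemble a part header the way bundlepart.getchunks does."""
    header = [struct.pack(_FPARTTYPESIZE, len(parttype)), parttype,
              struct.pack(_FPARTID, partid)]
    header.append(struct.pack(_FPARTPARAMCOUNT, len(manpar), len(advpar)))
    sizes = []
    for key, value in manpar + advpar:
        sizes.append(len(key))
        sizes.append(len(value))
    header.append(struct.pack('>%dB' % len(sizes), *sizes))
    for key, value in manpar + advpar:
        header.append(key)
        header.append(value)
    return b''.join(header)

def readpartheader(data):
    """Decode a header back, mirroring unbundlepart._readheader."""
    off = 0
    (typesize,) = struct.unpack_from(_FPARTTYPESIZE, data, off); off += 1
    parttype = data[off:off + typesize]; off += typesize
    (partid,) = struct.unpack_from(_FPARTID, data, off); off += 4
    mancount, advcount = struct.unpack_from(_FPARTPARAMCOUNT, data, off)
    off += 2
    n = mancount + advcount
    sizes = struct.unpack_from('>%dB' % (2 * n), data, off); off += 2 * n
    pairs = []
    for ksize, vsize in zip(sizes[::2], sizes[1::2]):
        key = data[off:off + ksize]; off += ksize
        value = data[off:off + vsize]; off += vsize
        pairs.append((key, value))
    # mandatory params come first, exactly as they were written
    return parttype, partid, pairs[:mancount], pairs[mancount:]
```

The ordering guarantee (all mandatory pairs before all advisory pairs) is what lets the reader split the decoded list with a single slice at `mancount`.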
@@ -1,100 +1,111 @@
# error.py - Mercurial exceptions
#
# Copyright 2005-2008 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Mercurial exceptions.

This allows us to catch exceptions at higher levels without forcing
imports.
"""

# Do not import anything here, please

class RevlogError(Exception):
    pass

class LookupError(RevlogError, KeyError):
    def __init__(self, name, index, message):
        self.name = name
        if isinstance(name, str) and len(name) == 20:
            from node import short
            name = short(name)
        RevlogError.__init__(self, '%s@%s: %s' % (index, name, message))

    def __str__(self):
        return RevlogError.__str__(self)

class ManifestLookupError(LookupError):
    pass

class CommandError(Exception):
    """Exception raised on errors in parsing the command line."""

class InterventionRequired(Exception):
    """Exception raised when a command requires human intervention."""

class Abort(Exception):
    """Raised if a command needs to print an error and exit."""
    def __init__(self, *args, **kw):
        Exception.__init__(self, *args)
        self.hint = kw.get('hint')

class ConfigError(Abort):
    'Exception raised when parsing config files'

class OutOfBandError(Exception):
    'Exception raised when a remote repo reports failure'

class ParseError(Exception):
    'Exception raised when parsing config files (msg[, pos])'

class RepoError(Exception):
    def __init__(self, *args, **kw):
        Exception.__init__(self, *args)
        self.hint = kw.get('hint')

class RepoLookupError(RepoError):
    pass

class CapabilityError(RepoError):
    pass

class RequirementError(RepoError):
    """Exception raised if .hg/requires has an unknown entry."""
    pass

class LockError(IOError):
    def __init__(self, errno, strerror, filename, desc):
        IOError.__init__(self, errno, strerror, filename)
        self.desc = desc

class LockHeld(LockError):
    def __init__(self, errno, filename, desc, locker):
        LockError.__init__(self, errno, 'Lock held', filename, desc)
        self.locker = locker

class LockUnavailable(LockError):
    pass

class ResponseError(Exception):
    """Raised to print an error with part of output and exit."""

class UnknownCommand(Exception):
    """Exception raised if command is not in the command table."""

class AmbiguousCommand(Exception):
    """Exception raised if command shortcut matches more than one command."""

# derived from KeyboardInterrupt to simplify some breakout code
class SignalInterrupt(KeyboardInterrupt):
    """Exception raised on SIGTERM and SIGHUP."""

class SignatureError(Exception):
    pass

class PushRaced(RuntimeError):
    """An exception raised during unbundling that indicates a push race"""

+# bundle2 related errors
+class BundleValueError(ValueError):
+    """error raised when bundle2 cannot be processed
+
+    Current main usecase is unsupported part types."""
+    pass
+
+class ReadOnlyPartError(RuntimeError):
+    """error raised when code tries to alter a part being generated"""
+    pass

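The newly added `ReadOnlyPartError` guards the `_generated` tri-state shown in `bundlepart`: once generation has started, `data` and `addparam` must refuse writes. A toy model of that guard (class names are illustrative, not Mercurial's API):

```python
class ReadOnlyPartError(RuntimeError):
    """stand-in for error.ReadOnlyPartError"""

class Part(object):
    """toy model of bundlepart's generation guard"""
    def __init__(self, data=b''):
        self._data = data
        # None: not started, False: being generated, True: generation done
        self._generated = None

    def setdata(self, data):
        # mirrors __setdata: any started generation makes the part read-only
        if self._generated is not None:
            raise ReadOnlyPartError('part is being generated')
        self._data = data

    def getchunks(self):
        self._generated = False
        yield self._data
        self._generated = True
```

Keeping the exception in `error` rather than `bundle2` means code that only needs to *catch* it can do so without importing the whole bundle2 machinery, matching the module's "no imports here, please" design.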
@@ -1,730 +1,730 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks, bundle2

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.unbundle10(fh, alg)
    elif version == '2X':
        return bundle2.unbundle20(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))


class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # Integer version of the push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.ret = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # set of all heads common after changeset bundle push
        self.commonheads = None

def push(repo, remote, force=False, revs=None, newbranch=False):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    locallock = None
    try:
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if _pushcheckoutgoing(pushop):
                pushop.repo.prepushoutgoinghooks(pushop.repo,
                                                 pushop.remote,
                                                 pushop.outgoing)
                if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
                                              False)
                    and pushop.remote.capable('bundle2-exp')):
                    _pushbundle2(pushop)
                else:
                    _pushchangeset(pushop)
            _pushcomputecommonheads(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
        finally:
            if lock is not None:
                lock.release()
    finally:
        if locallock is not None:
            locallock.release()

    _pushbookmark(pushop)
    return pushop.ret

def _pushdiscovery(pushop):
    # discovery
    unfi = pushop.repo.unfiltered()
    fci = discovery.findcommonincoming
    commoninc = fci(unfi, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are defined here to stay within the 80-char limit
            mso = _("push includes obsolete changeset: %s!")
            mst = "push includes %s changeset: %s!"
            # plain versions for the i18n tool to detect them
            _("push includes unstable changeset: %s!")
            _("push includes bumped changeset: %s!")
            _("push includes divergent changeset: %s!")
            # If there is at least one obsolete or unstable
            # changeset in missing, at least one of the missing heads
            # will be obsolete or unstable. So checking heads only is ok
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(_(mst)
                                     % (ctx.troubles()[0],
                                        ctx))
        newbm = pushop.ui.configlist('bookmarks', 'pushing')
        discovery.checkheads(unfi, pushop.remote, outgoing,
                             pushop.remoteheads,
                             pushop.newbranch,
                             bool(pushop.incoming),
                             newbm)
    return True

def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    # Send known heads to the server for race detection.
    capsblob = urllib.unquote(pushop.remote.capable('bundle2-exp'))
    caps = bundle2.decodecaps(capsblob)
    bundler = bundle2.bundle20(pushop.ui, caps)
    # create reply capability
    capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
    bundler.newpart('b2x:replycaps', data=capsblob)
    if not pushop.force:
        bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
    extrainfo = _pushbundle2extraparts(pushop, bundler)
    # add the changegroup bundle
    cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
    cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        reply = pushop.remote.unbundle(stream, ['force'], 'push')
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    try:
        op = bundle2.processbundle(pushop.repo, reply)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    cgreplies = op.records.getreplies(cgpart.id)
    assert len(cgreplies['changegroup']) == 1
    pushop.ret = cgreplies['changegroup'][0]['return']
    _pushbundle2extrareply(pushop, op, extrainfo)

def _pushbundle2extraparts(pushop, bundler):
    """hook function to let extensions add parts

    Return a dict to let extensions pass data to the reply processing.
    """
    return {}

def _pushbundle2extrareply(pushop, op, extrainfo):
    """hook function to let extensions react to part replies

    The dict from _pushbundle2extraparts is fed to this function.
    """
    pass

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.bundle10(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
                                        bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.ret = pushop.remote.unbundle(cg, remoteheads,
                                            'push')
    else:
        # we return an integer indicating remote head count
        # change
        pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())

def _pushcomputecommonheads(pushop):
    unfi = pushop.repo.unfiltered()
    if pushop.ret:
        # push succeeded, synchronize target of the push
        cheads = pushop.outgoing.missingheads
    elif pushop.revs is None:
        # all-out push failed; synchronize all common
        cheads = pushop.outgoing.commonheads
    else:
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(pushop.outgoing.common)
        nm = pushop.repo.changelog.nodemap
        cheads = [node for node in pushop.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          pushop.outgoing.commonheads,
                          pushop.outgoing.missing)
        cheads.extend(c.node() for c in revset)
    pushop.commonheads = cheads

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    unfi = pushop.repo.unfiltered()
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.ret is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        # Get the list of all revs draft on remote but public here.
        # XXX Beware that the revset breaks if droots is not strictly
        # XXX roots; we may want to ensure it is, but that is costly
        outdated = unfi.set('heads((%ln::%ln) and public())',
                            droots, cheads)
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.locallocked:
        phases.advanceboundary(pushop.repo, phase, nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    if (obsolete._enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    revnums = map(repo.changelog.rev, pushop.revs or [])
    ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
    (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
     ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
                           srchex=hex)

    for b, scid, dcid in advsrc:
        if ancestors and repo[scid].rev() not in ancestors:
            continue
        if remote.pushkey('bookmarks', b, dcid, scid):
            ui.status(_("updating bookmark %s\n") % b)
        else:
            ui.warn(_('updating bookmark %s failed!\n') % b)

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # do we force pull?
        self.force = force
        # the name of the pull transaction
        self._trname = 'pull\n' + util.hidepassword(remote.url())
        # hold the transaction once created
        self._tr = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps remaining todo (related to future bundle2 usage)
        self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible,
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset,
            # sync on this subset
            return self.heads

    def gettransaction(self):
        """get appropriate pull transaction, creating it if needed"""
        if self._tr is None:
            self._tr = self.repo.transaction(self._trname)
        return self._tr

    def closetransaction(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def releasetransaction(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

def pull(repo, remote, heads=None, force=False):
    pullop = pulloperation(repo, remote, heads, force)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    lock = pullop.repo.lock()
    try:
        _pulldiscovery(pullop)
        if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
            and pullop.remote.capable('bundle2-exp')):
            _pullbundle2(pullop)
        if 'changegroup' in pullop.todosteps:
            _pullchangeset(pullop)
        if 'phases' in pullop.todosteps:
            _pullphase(pullop)
        if 'obsmarkers' in pullop.todosteps:
            _pullobsolete(pullop)
        pullop.closetransaction()
    finally:
        pullop.releasetransaction()
        lock.release()

    return pullop.cgresult

def _pulldiscovery(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will handle all discovery
    at some point."""
    tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    pullop.common, pullop.fetch, pullop.rheads = tmp

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': set(['HG2X'])}
    capsblob = bundle2.encodecaps(pullop.repo.bundle2caps)
    kwargs['bundlecaps'].add('bundle2=' + urllib.quote(capsblob))
    # pulling changegroup
    pullop.todosteps.remove('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    _pullbundle2extraprepare(pullop, kwargs)
    if kwargs.keys() == ['format']:
        return # nothing to pull
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)

    if pullop.fetch:
        assert len(op.records['changegroup']) == 1
        pullop.cgresult = op.records['changegroup'][0]['return']

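The `bundle2=` capability entry built above is just a URL-quoted blob appended to the bundlecaps set. A minimal sketch of that encoding, using Python 3's `urllib.parse` in place of the Python 2 `urllib` this file uses, with an illustrative capabilities blob:

```python
# Sketch of advertising/decoding the bundle2 capabilities blob.
# The capsblob content is illustrative, not Mercurial's real output.
from urllib.parse import quote, unquote

capsblob = 'HG2X\nb2x:listkeys'          # hypothetical capabilities listing
cap = 'bundle2=' + quote(capsblob)       # entry added to bundlecaps
decoded = unquote(cap[len('bundle2='):]) # what getbundle() recovers
```

The newline and colon survive the roundtrip because `quote` percent-encodes them (`%0A`, `%3A`).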
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # it for nothing, which would break a future useful rollback call
    pullop.todosteps.remove('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    pullop.todosteps.remove('phases')
    remotephases = pullop.remote.listkeys('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        phases.advanceboundary(pullop.repo, phases.public, pheads)
        phases.advanceboundary(pullop.repo, phases.draft,
                               pullop.pulledsubset)
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        phases.advanceboundary(pullop.repo, phases.public,
                               pullop.pulledsubset)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    pullop.todosteps.remove('obsmarkers')
    tr = None
    if obsolete._enabled:
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
              **kwargs):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG2X depending on the bundlecaps
    passed. For now, the bundle can contain only changegroup, but this will
    change when more part types become available for bundle2.

    This is different from changegroup.getbundle, which only returns an HG10
    changegroup bundle. They may eventually get reunited in the future when we
    have a clearer idea of the API we want for querying different data.

    The implementation is at a very early stage and will get massive rework
    when the API of bundle is refined.
    """
    # build changegroup bundle here.
    cg = changegroup.getbundle(repo, source, heads=heads,
                               common=common, bundlecaps=bundlecaps)
    if bundlecaps is None or 'HG2X' not in bundlecaps:
        return cg
    # very crude first implementation,
    # the bundle API will change and the generation will be done lazily.
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urllib.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)
    if cg:
        bundler.newpart('b2x:changegroup', data=cg.getchunks())
    _getbundleextrapart(bundler, repo, source, heads=heads, common=common,
                        bundlecaps=bundlecaps, **kwargs)
    return util.chunkbuffer(bundler.getchunks())

def _getbundleextrapart(bundler, repo, source, heads=None, common=None,
                        bundlecaps=None, **kwargs):
    """hook function to let extensions add parts to the requested bundle"""
    pass

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = util.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

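The race check in `check_heads` compares the client's view of the heads (literal list, `['force']`, or a SHA-1 of the sorted head list) against the server's current heads. A standalone sketch, using `hashlib` in place of Mercurial's `util.sha1` and fake 20-byte node ids:

```python
# Sketch of the hashed-heads push-race check.
import hashlib

def heads_unchanged(local_heads, their_heads):
    # their_heads is the head list, ['force'], or
    # ['hashed', sha1(sorted heads the client saw)].
    heads_hash = hashlib.sha1(b''.join(sorted(local_heads))).digest()
    return (their_heads == ['force'] or
            their_heads == local_heads or
            their_heads == ['hashed', heads_hash])

heads = [b'\x11' * 20, b'\x22' * 20]            # fake node ids
seen = hashlib.sha1(b'\x11' * 20 + b'\x22' * 20).digest()
ok = heads_unchanged(heads, ['hashed', seen])          # heads match
raced = heads_unchanged(heads + [b'\x33' * 20],        # someone pushed
                        ['hashed', seen])
```

`ok` is true; `raced` is false, which is the condition that triggers `error.PushRaced` in the real code.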
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    tr = None
    lock = repo.lock()
    try:
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if util.safehasattr(cg, 'params'):
            try:
                tr = repo.transaction('unbundle')
                tr.hookargs['bundle2-exp'] = '1'
                r = bundle2.processbundle(repo, cg, lambda: tr).reply
                cl = repo.unfiltered().changelog
                p = cl.writepending() and repo.root or ""
                repo.hook('b2x-pretransactionclose', throw=True, source=source,
                          url=url, pending=p, **tr.hookargs)
                tr.close()
                repo.hook('b2x-transactionclose', source=source, url=url,
                          **tr.hookargs)
            except Exception, exc:
                exc.duringunbundle2 = True
                raise
        else:
            r = changegroup.addchangegroup(repo, cg, source, url)
    finally:
        if tr is not None:
            tr.release()
        lock.release()
    return r
@@ -1,833 +1,833 b''
# wireproto.py - generic wire protocol support functions
#
# Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import urllib, tempfile, os, sys
from i18n import _
from node import bin, hex
import changegroup as changegroupmod, bundle2
import peer, error, encoding, util, store, exchange


class abstractserverproto(object):
    """abstract class that summarizes the protocol API

    Used as reference and documentation.
    """

    def getargs(self, args):
        """return the value for arguments in <args>

        returns a list of values (same order as <args>)"""
        raise NotImplementedError()

    def getfile(self, fp):
        """write the whole content of a file into a file like object

        The file is in the form::

            (<chunk-size>\n<chunk>)+0\n

        chunk size is the ascii version of the int.
        """
        raise NotImplementedError()

    def redirect(self):
        """may set up interception for stdout and stderr

        See also the `restore` method."""
        raise NotImplementedError()

    # If the `redirect` function does install interception, the `restore`
    # function MUST be defined. If interception is not used, this function
    # MUST NOT be defined.
    #
    # left commented here on purpose
    #
    #def restore(self):
    #    """reinstall previous stdout and stderr and return intercepted stdout
    #    """
    #    raise NotImplementedError()

    def groupchunks(self, cg):
        """return 4096-byte chunks from a changegroup object

        Some protocols may have compressed the contents."""
        raise NotImplementedError()

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        pass

87
87
88 class localbatch(batcher):
88 class localbatch(batcher):
89 '''performs the queued calls directly'''
89 '''performs the queued calls directly'''
90 def __init__(self, local):
90 def __init__(self, local):
91 batcher.__init__(self)
91 batcher.__init__(self)
92 self.local = local
92 self.local = local
93 def submit(self):
93 def submit(self):
94 for name, args, opts, resref in self.calls:
94 for name, args, opts, resref in self.calls:
95 resref.set(getattr(self.local, name)(*args, **opts))
95 resref.set(getattr(self.local, name)(*args, **opts))
96
96
class remotebatch(batcher):
    '''batches the queued calls; uses as few roundtrips as possible'''
    def __init__(self, remote):
        '''remote must support _submitbatch(encbatch) and
        _submitone(op, encargs)'''
        batcher.__init__(self)
        self.remote = remote
    def submit(self):
        req, rsp = [], []
        for name, args, opts, resref in self.calls:
            mtd = getattr(self.remote, name)
            batchablefn = getattr(mtd, 'batchable', None)
            if batchablefn is not None:
                batchable = batchablefn(mtd.im_self, *args, **opts)
                encargsorres, encresref = batchable.next()
                if encresref:
                    req.append((name, encargsorres,))
                    rsp.append((batchable, encresref, resref,))
                else:
                    resref.set(encargsorres)
            else:
                if req:
                    self._submitreq(req, rsp)
                    req, rsp = [], []
                resref.set(mtd(*args, **opts))
        if req:
            self._submitreq(req, rsp)
    def _submitreq(self, req, rsp):
        encresults = self.remote._submitbatch(req)
        for encres, r in zip(encresults, rsp):
            batchable, encresref, resref = r
            encresref.set(encres)
            resref.set(batchable.next())

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Handle locally computable results first:
        if not one:
            yield "a local result", None
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = batchable.next()
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.func_name, encargsorres))
        return batchable.next()
    setattr(plain, 'batchable', f)
    return plain
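The two-yield coroutine protocol described in the docstring can be sketched standalone in Python 3 (the file itself is Python 2): the generator first yields `(encoded_args, future_or_None)`, and after the future is filled it yields the decoded result. `FakePeer` and its echo server are illustrative stand-ins, not Mercurial's API.

```python
# Python 3 sketch of the batchable two-yield protocol.
class Future:
    def set(self, value):
        self.value = value

def batchable(f):
    def plain(self, *args):
        gen = f(self, *args)
        encargs, encresref = next(gen)   # first yield: request + future
        if encresref is None:
            return encargs               # locally computed result
        encresref.set(self._submitone(f.__name__, encargs))
        return next(gen)                 # second yield: decoded reply
    plain.batchable = f                  # exposed for batching machinery
    return plain

class FakePeer:
    def _submitone(self, op, encargs):
        # pretend the server echoes the argument upper-cased
        return encargs['key'].upper()

    @batchable
    def lookup(self, key):
        f = Future()
        yield {'key': key}, f            # encoded request + reply future
        yield f.value.lower()            # decode the filled-in reply

result = FakePeer().lookup('Abc')
```

A batching caller would instead pull the first yield itself, send many requests in one roundtrip, fill each future, then resume each generator, which is exactly what `remotebatch._submitreq` does above.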

# list of nodes encoding / decoding

def decodelist(l, sep=' '):
    if l:
        return map(bin, l.split(sep))
    return []

def encodelist(l, sep=' '):
    return sep.join(map(hex, l))

# batched call argument encoding

def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))

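A quick demonstration of the escaping above: `:` is the escape character, so it is doubled first when escaping and undone last when unescaping; any other ordering would make a value such as `a:,b` decode ambiguously. The functions are reproduced verbatim so the example is self-contained.

```python
# Roundtrip demonstration of the batch-argument escaping.
def escapearg(plain):
    return (plain.replace(':', '::')
                 .replace(',', ':,')
                 .replace(';', ':;')
                 .replace('=', ':='))

def unescapearg(escaped):
    return (escaped.replace(':=', '=')
                   .replace(':;', ';')
                   .replace(':,', ',')
                   .replace('::', ':'))

sample = 'key=a,b;c:d'
escaped = escapearg(sample)    # separators now safe for the batch command
restored = unescapearg(escaped)
```

`escaped` comes out as `'key:=a:,b:;c::d'`: the literal `,`, `;`, and `=` are prefixed with `:`, and the literal `:` is doubled, so the batch command's own separators stay unambiguous.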
# client side

class wirepeer(peer.peerrepository):

    def batch(self):
        return remotebatch(self)
    def _submitbatch(self, req):
        cmds = []
        for op, argsdict in req:
            args = ','.join('%s=%s' % p for p in argsdict.iteritems())
            cmds.append('%s %s' % (op, args))
        rsp = self._call("batch", cmds=';'.join(cmds))
        return rsp.split(';')
    def _submitone(self, op, args):
        return self._call(op, **args)

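The payload `_submitbatch` builds for the `batch` wire command is easy to see with concrete values: each queued call becomes `op key=value,key=value`, and the calls are joined with `;`. A Python 3 sketch with illustrative commands:

```python
# Sketch of the "batch" command payload construction in _submitbatch.
req = [('lookup', {'key': 'tip'}), ('heads', {})]   # illustrative queue
cmds = []
for op, argsdict in req:
    args = ','.join('%s=%s' % p for p in argsdict.items())
    cmds.append('%s %s' % (op, args))
payload = ';'.join(cmds)
```

This is also why `escapearg`/`unescapearg` above exist: argument values containing `,`, `;`, `=`, or `:` must be escaped before they are embedded in this payload.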
    @batchable
    def lookup(self, key):
        self.requirecap('lookup', _('look up remote revision'))
        f = future()
        yield {'key': encoding.fromlocal(key)}, f
        d = f.value
        success, data = d[:-1].split(" ", 1)
        if int(success):
            yield bin(data)
        self._abort(error.RepoError(data))

    @batchable
    def heads(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            yield decodelist(d[:-1])
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def known(self, nodes):
        f = future()
        yield {'nodes': encodelist(nodes)}, f
        d = f.value
        try:
            yield [bool(int(b)) for b in d]
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

240 @batchable
240 @batchable
241 def branchmap(self):
241 def branchmap(self):
242 f = future()
242 f = future()
243 yield {}, f
243 yield {}, f
244 d = f.value
244 d = f.value
245 try:
245 try:
246 branchmap = {}
246 branchmap = {}
247 for branchpart in d.splitlines():
247 for branchpart in d.splitlines():
248 branchname, branchheads = branchpart.split(' ', 1)
248 branchname, branchheads = branchpart.split(' ', 1)
249 branchname = encoding.tolocal(urllib.unquote(branchname))
249 branchname = encoding.tolocal(urllib.unquote(branchname))
250 branchheads = decodelist(branchheads)
250 branchheads = decodelist(branchheads)
251 branchmap[branchname] = branchheads
251 branchmap[branchname] = branchheads
252 yield branchmap
252 yield branchmap
253 except TypeError:
253 except TypeError:
254 self._abort(error.ResponseError(_("unexpected response:"), d))
254 self._abort(error.ResponseError(_("unexpected response:"), d))
255
255
256 def branches(self, nodes):
256 def branches(self, nodes):
257 n = encodelist(nodes)
257 n = encodelist(nodes)
258 d = self._call("branches", nodes=n)
258 d = self._call("branches", nodes=n)
259 try:
259 try:
260 br = [tuple(decodelist(b)) for b in d.splitlines()]
260 br = [tuple(decodelist(b)) for b in d.splitlines()]
261 return br
261 return br
262 except ValueError:
262 except ValueError:
263 self._abort(error.ResponseError(_("unexpected response:"), d))
263 self._abort(error.ResponseError(_("unexpected response:"), d))
264
264
265 def between(self, pairs):
265 def between(self, pairs):
266 batch = 8 # avoid giant requests
266 batch = 8 # avoid giant requests
267 r = []
267 r = []
268 for i in xrange(0, len(pairs), batch):
268 for i in xrange(0, len(pairs), batch):
269 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
269 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
270 d = self._call("between", pairs=n)
270 d = self._call("between", pairs=n)
271 try:
271 try:
272 r.extend(l and decodelist(l) or [] for l in d.splitlines())
272 r.extend(l and decodelist(l) or [] for l in d.splitlines())
273 except ValueError:
273 except ValueError:
274 self._abort(error.ResponseError(_("unexpected response:"), d))
274 self._abort(error.ResponseError(_("unexpected response:"), d))
275 return r
275 return r
276
276
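`between` caps each request at 8 pairs to avoid giant requests; the chunking idiom is a plain stride over the list. A standalone sketch (the helper name `chunked` is hypothetical):

```python
def chunked(items, batch=8):
    """Yield successive slices of at most `batch` items."""
    for i in range(0, len(items), batch):
        yield items[i:i + batch]

# 20 pairs become three requests of 8, 8 and 4 items
print([len(c) for c in chunked(list(range(20)))])  # -> [8, 8, 4]
```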
    @batchable
    def pushkey(self, namespace, key, old, new):
        if not self.capable('pushkey'):
            yield False, None
        f = future()
        self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
        yield {'namespace': encoding.fromlocal(namespace),
               'key': encoding.fromlocal(key),
               'old': encoding.fromlocal(old),
               'new': encoding.fromlocal(new)}, f
        d = f.value
        d, output = d.split('\n', 1)
        try:
            d = bool(int(d))
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), d)
        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        yield d

    @batchable
    def listkeys(self, namespace):
        if not self.capable('pushkey'):
            yield {}, None
        f = future()
        self.ui.debug('preparing listkeys for "%s"\n' % namespace)
        yield {'namespace': encoding.fromlocal(namespace)}, f
        d = f.value
        r = {}
        for l in d.splitlines():
            k, v = l.split('\t')
            r[encoding.tolocal(k)] = encoding.tolocal(v)
        yield r

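The listkeys payload parsed above is newline-separated "key<TAB>value" lines (the server-side `listkeys` command further down produces the same shape). A round-trip sketch, ignoring the local/UTF-8 encoding conversions; the helper names are illustrative:

```python
def encodekeys(d):
    """Encode a pushkey namespace dict as tab-separated lines."""
    return '\n'.join('%s\t%s' % (k, v) for k, v in sorted(d.items()))

def decodekeys(payload):
    """Decode tab-separated lines back into a dict."""
    r = {}
    for l in payload.splitlines():
        k, v = l.split('\t')
        r[k] = v
    return r

data = {'bookmarks': '1', 'phases': '1'}
assert decodekeys(encodekeys(data)) == data
```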
    def stream_out(self):
        return self._callstream('stream_out')

    def changegroup(self, nodes, kind):
        n = encodelist(nodes)
        f = self._callcompressable("changegroup", roots=n)
        return changegroupmod.unbundle10(f, 'UN')

    def changegroupsubset(self, bases, heads, kind):
        self.requirecap('changegroupsubset', _('look up remote changes'))
        bases = encodelist(bases)
        heads = encodelist(heads)
        f = self._callcompressable("changegroupsubset",
                                   bases=bases, heads=heads)
        return changegroupmod.unbundle10(f, 'UN')

    def getbundle(self, source, heads=None, common=None, bundlecaps=None,
                  **kwargs):
        self.requirecap('getbundle', _('look up remote changes'))
        opts = {}
        if heads is not None:
            opts['heads'] = encodelist(heads)
        if common is not None:
            opts['common'] = encodelist(common)
        if bundlecaps is not None:
            opts['bundlecaps'] = ','.join(bundlecaps)
        opts.update(kwargs)
        f = self._callcompressable("getbundle", **opts)
        if bundlecaps is not None and 'HG2X' in bundlecaps:
            return bundle2.unbundle20(self.ui, f)
        else:
            return changegroupmod.unbundle10(f, 'UN')

    def unbundle(self, cg, heads, source):
        '''Send cg (a readable file-like object representing the
        changegroup to push, typically a chunkbuffer object) to the
        remote server as a bundle.

        When pushing a bundle10 stream, return an integer indicating the
        result of the push (see localrepository.addchangegroup()).

        When pushing a bundle20 stream, return a bundle20 stream.'''

        if heads != ['force'] and self.capable('unbundlehash'):
            heads = encodelist(['hashed',
                                util.sha1(''.join(sorted(heads))).digest()])
        else:
            heads = encodelist(heads)

        if util.safehasattr(cg, 'deltaheader'):
            # this is a bundle10, do the old style call sequence
            ret, output = self._callpush("unbundle", cg, heads=heads)
            if ret == "":
                raise error.ResponseError(
                    _('push failed:'), output)
            try:
                ret = int(ret)
            except ValueError:
                raise error.ResponseError(
                    _('push failed (unexpected response):'), ret)

            for l in output.splitlines(True):
                self.ui.status(_('remote: '), l)
        else:
            # bundle2 push. Send a stream, fetch a stream.
            stream = self._calltwowaystream('unbundle', cg, heads=heads)
            ret = bundle2.unbundle20(self.ui, stream)
        return ret

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        # don't pass optional arguments left at their default value
        opts = {}
        if three is not None:
            opts['three'] = three
        if four is not None:
            opts['four'] = four
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        with _callstream.

        returns the server reply as a file like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute a <cmd> on server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _calltwowaystream(self, cmd, fp, **args):
        """execute <cmd> on server

        The command will send a stream to the server and get a stream in reply.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """clearly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()

# server side

# wire protocol commands can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts

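`options` pops the optional keys a command understands out of the raw wire argument dict and warns about anything left over. A standalone sketch of the same filtering, with the warning output elided (the helper name `pickopts` is hypothetical):

```python
def pickopts(keys, others):
    """Move recognized keys from `others` into a new opts dict."""
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]   # consumed; anything remaining is unexpected
    return opts

others = {'three': '3', 'bogus': 'x'}
opts = pickopts(['three', 'four'], others)
assert opts == {'three': '3'}
assert others == {'bogus': 'x'}   # unexpected args are left behind
```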
# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

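The server-side `batch` command reverses the client encoding: split on ';' into individual commands, then on the first space into the op and its ','-separated 'key=value' arguments. A parsing sketch (argument unescaping and dispatch are omitted; `parsebatch` is a hypothetical name):

```python
def parsebatch(cmds):
    """Parse a batch 'cmds' string back into (op, argsdict) pairs."""
    req = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:  # an op with no arguments produces an empty string here
                n, v = a.split('=')
                vals[n] = v
        req.append((op, vals))
    return req

assert parsebatch('heads ;known nodes=aa bb') == \
    [('heads', {}), ('known', {'nodes': 'aa bb'})]
```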
@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

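Each branchmap line is "<url-quoted branch name> <space-separated head nodes>", one line per branch; the client-side parser in `wirepeer.branchmap` above undoes this. A round-trip sketch using hex node strings and Python 3's `urllib.parse` (the module here uses Python 2's `urllib`); the helper names are illustrative:

```python
from urllib.parse import quote, unquote

def encodebranchmap(branchmap):
    """One 'quoted-name head head...' line per branch."""
    return '\n'.join('%s %s' % (quote(b), ' '.join(nodes))
                     for b, nodes in sorted(branchmap.items()))

def decodebranchmap(payload):
    r = {}
    for line in payload.splitlines():
        # quoting guarantees the name itself contains no literal space
        name, heads = line.split(' ', 1)
        r[unquote(name)] = heads.split(' ')
    return r

bm = {'default': ['1f0dee64', 'cafebabe'], 'my branch': ['deadbeef']}
assert decodebranchmap(encodebranchmap(bm)) == bm
```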
@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes made here are propagated to both the `capabilities` and `hello`
      commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('experimental', 'bundle2-exp', False):
        capsblob = bundle2.encodecaps(repo.bundle2caps)
        caps.append('bundle2-exp=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

# If you are writing an extension and consider wrapping this function, wrap
# `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
    for k, v in opts.iteritems():
        if k in ('heads', 'common'):
            opts[k] = decodelist(v)
        elif k == 'bundlecaps':
            opts[k] = set(v.split(','))
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

        capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

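Given the RFC822-like shape described in the `hello` docstring, a client can split each line on the first ': ' into a field name and value, then treat 'capabilities' as a space-separated token set. A parsing sketch (not Mercurial's own client code; `parsehello` is a hypothetical name):

```python
def parsehello(response):
    """Parse a hello response into a {field: value} dict."""
    fields = {}
    for line in response.splitlines():
        key, value = line.split(': ', 1)
        fields[key] = value
    return fields

resp = 'capabilities: lookup branchmap unbundle=HG10GZ,HG10BZ,HG10UN\n'
caps = set(parsehello(resp)['capabilities'].split(' '))
assert 'branchmap' in caps
assert any(c.startswith('unbundle=') for c in caps)
```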
@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
                   for k, v in d])
    return t

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)

def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. The client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))
776 return streamres(streamer(repo, entries, total_bytes))
777
777
778 @wireprotocommand('unbundle', 'heads')
778 @wireprotocommand('unbundle', 'heads')
779 def unbundle(repo, proto, heads):
779 def unbundle(repo, proto, heads):
780 their_heads = decodelist(heads)
780 their_heads = decodelist(heads)
781
781
782 try:
782 try:
783 proto.redirect()
783 proto.redirect()
784
784
785 exchange.check_heads(repo, their_heads, 'preparing changes')
785 exchange.check_heads(repo, their_heads, 'preparing changes')
786
786
787 # write bundle data to temporary file because it can be big
787 # write bundle data to temporary file because it can be big
788 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
788 fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
789 fp = os.fdopen(fd, 'wb+')
789 fp = os.fdopen(fd, 'wb+')
790 r = 0
790 r = 0
791 try:
791 try:
792 proto.getfile(fp)
792 proto.getfile(fp)
793 fp.seek(0)
793 fp.seek(0)
794 gen = exchange.readbundle(repo.ui, fp, None)
794 gen = exchange.readbundle(repo.ui, fp, None)
795 r = exchange.unbundle(repo, gen, their_heads, 'serve',
795 r = exchange.unbundle(repo, gen, their_heads, 'serve',
796 proto._client())
796 proto._client())
797 if util.safehasattr(r, 'addpart'):
797 if util.safehasattr(r, 'addpart'):
798 # The return looks streameable, we are in the bundle2 case and
798 # The return looks streameable, we are in the bundle2 case and
799 # should return a stream.
799 # should return a stream.
800 return streamres(r.getchunks())
800 return streamres(r.getchunks())
801 return pushres(r)
801 return pushres(r)
802
802
803 finally:
803 finally:
804 fp.close()
804 fp.close()
805 os.unlink(tempname)
805 os.unlink(tempname)
806 except bundle2.BundleValueError, exc:
806 except error.BundleValueError, exc:
807 bundler = bundle2.bundle20(repo.ui)
807 bundler = bundle2.bundle20(repo.ui)
808 bundler.newpart('B2X:ERROR:UNKNOWNPART', [('parttype', str(exc))])
808 bundler.newpart('B2X:ERROR:UNKNOWNPART', [('parttype', str(exc))])
809 return streamres(bundler.getchunks())
809 return streamres(bundler.getchunks())
810 except util.Abort, inst:
810 except util.Abort, inst:
811 # The old code we moved used sys.stderr directly.
811 # The old code we moved used sys.stderr directly.
812 # We did not change it to minimise code change.
812 # We did not change it to minimise code change.
813 # This need to be moved to something proper.
813 # This need to be moved to something proper.
814 # Feel free to do it.
814 # Feel free to do it.
815 if getattr(inst, 'duringunbundle2', False):
815 if getattr(inst, 'duringunbundle2', False):
816 bundler = bundle2.bundle20(repo.ui)
816 bundler = bundle2.bundle20(repo.ui)
817 manargs = [('message', str(inst))]
817 manargs = [('message', str(inst))]
818 advargs = []
818 advargs = []
819 if inst.hint is not None:
819 if inst.hint is not None:
820 advargs.append(('hint', inst.hint))
820 advargs.append(('hint', inst.hint))
821 bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
821 bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
822 manargs, advargs))
822 manargs, advargs))
823 return streamres(bundler.getchunks())
823 return streamres(bundler.getchunks())
824 else:
824 else:
825 sys.stderr.write("abort: %s\n" % inst)
825 sys.stderr.write("abort: %s\n" % inst)
826 return pushres(0)
826 return pushres(0)
827 except error.PushRaced, exc:
827 except error.PushRaced, exc:
828 if getattr(exc, 'duringunbundle2', False):
828 if getattr(exc, 'duringunbundle2', False):
829 bundler = bundle2.bundle20(repo.ui)
829 bundler = bundle2.bundle20(repo.ui)
830 bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
830 bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
831 return streamres(bundler.getchunks())
831 return streamres(bundler.getchunks())
832 else:
832 else:
833 return pusherr(str(exc))
833 return pusherr(str(exc))
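
The streaming-clone wire format described in stream()'s docstring (a status
line, then a "<filecount> <totalbytes>" line, then per-file "<name>\0<size>"
headers followed by the raw contents) can be illustrated with a small
client-side parser. This is only a sketch: `parse_stream` and the sample
payload below are made up for illustration and are not Mercurial's actual
client code.

```python
import io

def parse_stream(fp):
    """Parse a stream-clone payload from a binary file object.

    Hypothetical helper, for illustration only: it mirrors the format
    emitted by stream()/streamer() above, where the declared sizes (not
    newlines) delimit each file's raw contents.
    """
    status = fp.readline().rstrip(b'\n')
    if status != b'0':  # '1' = streaming not allowed, '2' = lock error
        raise ValueError('server reported error code %r' % status)
    filecount, totalbytes = map(int, fp.readline().split())
    files = []
    for _ in range(filecount):
        # per-file header: "<name>\0<size>\n"
        name, size = fp.readline().rstrip(b'\n').split(b'\0')
        files.append((name.decode(), fp.read(int(size))))
    return totalbytes, files

# Build a tiny fake payload and parse it back.
payload = b'0\n' + b'1 5\n' + b'data/a.i\x005\n' + b'hello'
total, files = parse_stream(io.BytesIO(payload))
# total == 5, files == [('data/a.i', b'hello')]
```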
@@ -1,1085 +1,1085 b''

Create an extension to test the bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test the bundle2 implementation
  >
  > The current bundle2 implementation is far too limited to be used in any
  > core code. We still need to be able to test it while it grows up.
  > """
  >
  > import sys
  > import os
  >
  > try:
  >     import msvcrt
  >     msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
  > except ImportError:
  >     pass
  >
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > from mercurial import error
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write('    %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
  >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
  >         op.reply.newpart('test:pong', [('in-reply-to', str(part.id))])
  >
  > @bundle2.parthandler('test:debugreply')
  > def debugreply(op, part):
  >     """print data about the capacity of the bundle reply"""
  >     if op.reply is None:
  >         op.ui.write('debugreply: no reply\n')
  >     else:
  >         op.ui.write('debugreply: capabilities:\n')
  >         for cap in sorted(op.reply.capabilities):
  >             op.ui.write('debugreply:     %r\n' % cap)
  >             for val in op.reply.capabilities[cap]:
  >                 op.ui.write('debugreply:         %r\n' % val)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts in the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('', 'pushrace', False, 'include a check:heads part with unknown nodes'),
  >           ('r', 'rev', [], 'include those changesets in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard output"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
  >             raise util.Abort('%s' % exc)
  >
  >     if opts['reply']:
  >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
  >         bundler.newpart('b2x:replycaps', data=capsstring)
  >
  >     if opts['pushrace']:
  >         # also serves to test the assignment of data outside of __init__
  >         part = bundler.newpart('b2x:check:heads')
  >         part.data = '01234567890123456789'
  >
  >     revs = opts['rev']
  >     if 'rev' in opts:
  >         revs = scmutil.revrange(repo, opts['rev'])
  >     if revs:
  >         # very crude version of a changegroup part creation
  >         bundled = repo.revs('%ld::%ld', revs, revs)
  >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
  >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
  >         outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
  >         cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
  >         bundler.newpart('b2x:changegroup', data=cg.getchunks())
  >
  >     if opts['parts']:
  >         bundler.newpart('test:empty')
  >         # add a second one to make sure we handle multiple parts
  >         bundler.newpart('test:empty')
  >         bundler.newpart('test:song', data=ELEPHANTSSONG)
  >         bundler.newpart('test:debugreply')
  >         mathpart = bundler.newpart('test:math')
  >         mathpart.addparam('pi', '3.14')
  >         mathpart.addparam('e', '2.72')
  >         mathpart.addparam('cooking', 'raw', mandatory=False)
  >         mathpart.data = '42'
  >     if opts['unknown']:
  >         bundler.newpart('test:UNKNOWN', data='some random content')
  >     if opts['parts']:
  >         bundler.newpart('test:ping')
  >
  >     if path is None:
  >         file = sys.stdout
  >     else:
  >         file = open(path, 'w')
  >
  >     for chunk in bundler.getchunks():
  >         file.write(chunk)
  >
  > @command('unbundle2', [], '')
  > def cmdunbundle2(ui, repo, replypath=None):
  >     """process a bundle2 stream from stdin on the current repo"""
  >     try:
  >         tr = None
  >         lock = repo.lock()
  >         tr = repo.transaction('processbundle')
  >         try:
  >             unbundler = bundle2.unbundle20(ui, sys.stdin)
  >             op = bundle2.processbundle(repo, unbundler, lambda: tr)
  >             tr.close()
  >         except error.BundleValueError, exc:
  >             raise util.Abort('missing support for %s' % exc)
  >         except error.PushRaced, exc:
  >             raise util.Abort('push race: %s' % exc)
  >     finally:
  >         if tr is not None:
  >             tr.release()
  >         lock.release()
  >     remains = sys.stdin.read()
  >     ui.write('%i unread bytes\n' % len(remains))
  >     if op.records['song']:
  >         totalverses = sum(r['verses'] for r in op.records['song'])
  >         ui.write('%i total verses sung\n' % totalverses)
  >     for rec in op.records['changegroup']:
  >         ui.write('addchangegroup return: %i\n' % rec['return'])
  >     if op.reply is not None and replypath is not None:
  >         file = open(replypath, 'w')
  >         for chunk in op.reply.getchunks():
  >             file.write(chunk)
  >
  > @command('statbundle2', [], '')
  > def cmdstatbundle2(ui, repo):
  >     """print statistics on the bundle2 container read from stdin"""
  >     unbundler = bundle2.unbundle20(ui, sys.stdin)
  >     try:
  >         params = unbundler.params
  >     except KeyError, exc:
  >         raise util.Abort('unknown parameters: %s' % exc)
  >     ui.write('options count: %i\n' % len(params))
  >     for key in sorted(params):
  >         ui.write('- %s\n' % key)
  >         value = params[key]
  >         if value is not None:
  >             ui.write('    %s\n' % value)
  >     count = 0
  >     for p in unbundler.iterparts():
  >         count += 1
  >         ui.write('  :%s:\n' % p.type)
  >         ui.write('    mandatory: %i\n' % len(p.mandatoryparams))
  >         ui.write('    advisory: %i\n' % len(p.advisoryparams))
  >         ui.write('    payload: %i bytes\n' % len(p.read()))
  >     ui.write('parts count: %i\n' % count)
  > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [experimental]
  > bundle2-exp=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > [web]
  > push_ssl = false
  > allow_push = *
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
  HG2X\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0

Test that old style bundles are detected and refused

  $ hg bundle --all ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameter form

Test generation of a simple option

  $ hg bundle2 --param 'caution'
  HG2X\x00\x07caution\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation of multiple options

  $ hg bundle2 --param 'caution' --param 'meal'
  HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
  HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
      vegan
  parts count: 0

parameter with special char in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  parts count: 0

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: 'Gravity'
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
  start emission of HG2X stream
  bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  start of parts
  end of bundle

file content is ok

  $ cat ../out.hg2
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

unbundling debug

  $ hg statbundle2 --debug < ../out.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  ignoring unknown parameter 'e|! 7/'
  ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  start extraction of bundle2 parts
  part header size: 0
  end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]


Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug
  start emission of HG2X stream
  bundle parameter: 
  start of parts
  bundle part: "test:empty"
  bundle part: "test:empty"
  bundle part: "test:song"
  bundle part: "test:debugreply"
  bundle part: "test:math"
  bundle part: "test:ping"
  end of bundle

  $ cat ../parts.hg2
  HG2X\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


  $ hg statbundle2 < ../parts.hg2
  options count: 0
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:song:
      mandatory: 0
      advisory: 0
      payload: 178 bytes
    :test:debugreply:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:math:
      mandatory: 2
      advisory: 1
      payload: 2 bytes
    :test:ping:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
  parts count: 6

405 $ hg statbundle2 --debug < ../parts.hg2
405 $ hg statbundle2 --debug < ../parts.hg2
406 start processing of HG2X stream
406 start processing of HG2X stream
407 reading bundle2 stream parameters
407 reading bundle2 stream parameters
408 options count: 0
408 options count: 0
409 start extraction of bundle2 parts
409 start extraction of bundle2 parts
410 part header size: 17
410 part header size: 17
411 part type: "test:empty"
411 part type: "test:empty"
412 part id: "0"
412 part id: "0"
413 part parameters: 0
413 part parameters: 0
414 :test:empty:
414 :test:empty:
415 mandatory: 0
415 mandatory: 0
416 advisory: 0
416 advisory: 0
417 payload chunk size: 0
417 payload chunk size: 0
418 payload: 0 bytes
418 payload: 0 bytes
419 part header size: 17
419 part header size: 17
420 part type: "test:empty"
420 part type: "test:empty"
421 part id: "1"
421 part id: "1"
422 part parameters: 0
422 part parameters: 0
423 :test:empty:
423 :test:empty:
424 mandatory: 0
424 mandatory: 0
425 advisory: 0
425 advisory: 0
426 payload chunk size: 0
426 payload chunk size: 0
427 payload: 0 bytes
427 payload: 0 bytes
428 part header size: 16
428 part header size: 16
429 part type: "test:song"
429 part type: "test:song"
430 part id: "2"
430 part id: "2"
431 part parameters: 0
431 part parameters: 0
432 :test:song:
432 :test:song:
433 mandatory: 0
433 mandatory: 0
434 advisory: 0
434 advisory: 0
435 payload chunk size: 178
435 payload chunk size: 178
436 payload chunk size: 0
436 payload chunk size: 0
437 payload: 178 bytes
437 payload: 178 bytes
438 part header size: 22
438 part header size: 22
439 part type: "test:debugreply"
439 part type: "test:debugreply"
440 part id: "3"
440 part id: "3"
441 part parameters: 0
441 part parameters: 0
442 :test:debugreply:
442 :test:debugreply:
443 mandatory: 0
443 mandatory: 0
444 advisory: 0
444 advisory: 0
445 payload chunk size: 0
445 payload chunk size: 0
446 payload: 0 bytes
446 payload: 0 bytes
447 part header size: 43
447 part header size: 43
448 part type: "test:math"
448 part type: "test:math"
449 part id: "4"
449 part id: "4"
450 part parameters: 3
450 part parameters: 3
451 :test:math:
451 :test:math:
452 mandatory: 2
452 mandatory: 2
453 advisory: 1
453 advisory: 1
454 payload chunk size: 2
454 payload chunk size: 2
455 payload chunk size: 0
455 payload chunk size: 0
456 payload: 2 bytes
456 payload: 2 bytes
457 part header size: 16
457 part header size: 16
458 part type: "test:ping"
458 part type: "test:ping"
459 part id: "5"
459 part id: "5"
460 part parameters: 0
460 part parameters: 0
461 :test:ping:
461 :test:ping:
462 mandatory: 0
462 mandatory: 0
463 advisory: 0
463 advisory: 0
464 payload chunk size: 0
464 payload chunk size: 0
465 payload: 0 bytes
465 payload: 0 bytes
466 part header size: 0
466 part header size: 0
467 end of bundle2 stream
467 end of bundle2 stream
468 parts count: 6
468 parts count: 6
469
469
Test actual unbundling of test part
=======================================

Process the bundle

$ hg unbundle2 --debug < ../parts.hg2
start processing of HG2X stream
reading bundle2 stream parameters
start extraction of bundle2 parts
part header size: 17
part type: "test:empty"
part id: "0"
part parameters: 0
ignoring unknown advisory part 'test:empty'
payload chunk size: 0
part header size: 17
part type: "test:empty"
part id: "1"
part parameters: 0
ignoring unknown advisory part 'test:empty'
payload chunk size: 0
part header size: 16
part type: "test:song"
part id: "2"
part parameters: 0
found a handler for part 'test:song'
The choir starts singing:
payload chunk size: 178
payload chunk size: 0
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
part header size: 22
part type: "test:debugreply"
part id: "3"
part parameters: 0
found a handler for part 'test:debugreply'
debugreply: no reply
payload chunk size: 0
part header size: 43
part type: "test:math"
part id: "4"
part parameters: 3
ignoring unknown advisory part 'test:math'
payload chunk size: 2
payload chunk size: 0
part header size: 16
part type: "test:ping"
part id: "5"
part parameters: 0
found a handler for part 'test:ping'
received ping request (id 5)
payload chunk size: 0
part header size: 0
end of bundle2 stream
0 unread bytes
3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

$ hg bundle2 --parts --unknown ../unknown.hg2

$ hg unbundle2 < ../unknown.hg2
The choir starts singing:
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
debugreply: no reply
0 unread bytes
abort: missing support for test:unknown
[255]

unbundle with a reply

$ hg bundle2 --parts --reply ../parts-reply.hg2
$ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
0 unread bytes
3 total verses sung
The reply is a bundle

$ cat ../reply.hg2
HG2X\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
debugreply: 'city=!'
debugreply: 'celeste,ville'
debugreply: 'elephants'
debugreply: 'babar'
debugreply: 'celeste'
debugreply: 'ping-pong'
\x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to6\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to6\x00\x00\x00=received ping request (id 6) (esc)
replying to ping request (id 6)
\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

$ hg statbundle2 < ../reply.hg2
options count: 0
:b2x:output:
mandatory: 0
advisory: 1
payload: 217 bytes
:b2x:output:
mandatory: 0
advisory: 1
payload: 201 bytes
:test:pong:
mandatory: 1
advisory: 0
payload: 0 bytes
:b2x:output:
mandatory: 0
advisory: 1
payload: 61 bytes
parts count: 4

Unbundle the reply to get the output:

$ hg unbundle2 < ../reply.hg2
remote: The choir starts singing:
remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
remote: debugreply: capabilities:
remote: debugreply: 'city=!'
remote: debugreply: 'celeste,ville'
remote: debugreply: 'elephants'
remote: debugreply: 'babar'
remote: debugreply: 'celeste'
remote: debugreply: 'ping-pong'
remote: received ping request (id 6)
remote: replying to ping request (id 6)
0 unread bytes

Test push race detection

$ hg bundle2 --pushrace ../part-race.hg2

$ hg unbundle2 < ../part-race.hg2
0 unread bytes
abort: push race: repository changed while pushing - please try again
[255]
Support for changegroup
===================================

$ hg unbundle $TESTDIR/bundles/rebase.hg
adding changesets
adding manifests
adding file changes
added 8 changesets with 7 changes to 7 files (+3 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

$ hg log -G
o changeset: 8:02de42196ebe
| tag: tip
| parent: 6:24b6387c8c8c
| user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: H
|
| o changeset: 7:eea13746799a
|/| parent: 6:24b6387c8c8c
| | parent: 5:9520eea781bc
| | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| | date: Sat Apr 30 15:24:48 2011 +0200
| | summary: G
| |
o | changeset: 6:24b6387c8c8c
| | parent: 1:cd010b8cd998
| | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| | date: Sat Apr 30 15:24:48 2011 +0200
| | summary: F
| |
| o changeset: 5:9520eea781bc
|/ parent: 1:cd010b8cd998
| user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: E
|
| o changeset: 4:32af7686d403
| | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| | date: Sat Apr 30 15:24:48 2011 +0200
| | summary: D
| |
| o changeset: 3:5fddd98957c8
| | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| | date: Sat Apr 30 15:24:48 2011 +0200
| | summary: C
| |
| o changeset: 2:42ccdea3bb16
|/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: B
|
o changeset: 1:cd010b8cd998
parent: -1:000000000000
user: Nicolas Dumazet <nicdumz.commits@gmail.com>
date: Sat Apr 30 15:24:48 2011 +0200
summary: A

@ changeset: 0:3903775176ed
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: a

$ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
4 changesets found
list of changesets:
32af7686d403cf45b5d95f2d70cebea587ac806a
9520eea781bcca16c1e15acc0ba14335a0e8e5ba
eea13746799a9e0bfd88f29d3c2e9dc9389f524f
02de42196ebee42ef284b6780a87cdc96e8eaab6
start emission of HG2X stream
bundle parameter:
start of parts
bundle part: "b2x:changegroup"
bundling: 1/4 changesets (25.00%)
bundling: 2/4 changesets (50.00%)
bundling: 3/4 changesets (75.00%)
bundling: 4/4 changesets (100.00%)
bundling: 1/4 manifests (25.00%)
bundling: 2/4 manifests (50.00%)
bundling: 3/4 manifests (75.00%)
bundling: 4/4 manifests (100.00%)
bundling: D 1/3 files (33.33%)
bundling: E 2/3 files (66.67%)
bundling: H 3/3 files (100.00%)
end of bundle

$ cat ../rev.hg2
HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
\x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
\x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
\x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
\x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
\x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
\x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
\xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
\x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
l\r (no-eol) (esc)
\x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
\x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

$ hg unbundle2 < ../rev.hg2
adding changesets
adding manifests
adding file changes
added 0 changesets with 0 changes to 3 files
0 unread bytes
addchangegroup return: 1

with reply

$ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
$ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
0 unread bytes
addchangegroup return: 1

$ cat ../rev-reply.hg2
HG2X\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
adding manifests
adding file changes
added 0 changesets with 0 changes to 3 files
\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
Real world exchange
=====================


clone --pull

$ cd ..
$ hg clone main other --pull --rev 9520eea781bc
adding changesets
adding manifests
adding file changes
added 2 changesets with 2 changes to 2 files
updating to branch default
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R other log -G
@ changeset: 1:9520eea781bc
| tag: tip
| user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: E
|
o changeset: 0:cd010b8cd998
user: Nicolas Dumazet <nicdumz.commits@gmail.com>
date: Sat Apr 30 15:24:48 2011 +0200
summary: A


pull

$ hg -R other pull -r 24b6387c8c8c
pulling from $TESTTMP/main (glob)
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

pull empty

$ hg -R other pull -r 24b6387c8c8c
pulling from $TESTTMP/main (glob)
no changes found

push

$ hg -R main push other --rev eea13746799a
pushing to other
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 0 changes to 0 files (-1 heads)

pull over ssh

$ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback
pulling from ssh://user@dummy/main
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

pull over http

$ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
$ cat main.pid >> $DAEMON_PIDS

$ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16
pulling from http://localhost:$HGPORT/
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads .' to see heads, 'hg merge' to merge)
$ cat main-error.log

push over ssh

$ hg -R main push ssh://user@dummy/other -r 5fddd98957c8
pushing to ssh://user@dummy/other
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

push over http

$ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
$ cat other.pid >> $DAEMON_PIDS

$ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403
pushing to http://localhost:$HGPORT2/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
$ cat other-error.log

Check final content.

$ hg -R other log -G
o changeset: 7:32af7686d403
| tag: tip
| user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: D
|
o changeset: 6:5fddd98957c8
| user: Nicolas Dumazet <nicdumz.commits@gmail.com>
| date: Sat Apr 30 15:24:48 2011 +0200
| summary: C
|
873 o changeset: 5:42ccdea3bb16
873 o changeset: 5:42ccdea3bb16
874 | parent: 0:cd010b8cd998
874 | parent: 0:cd010b8cd998
875 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
875 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
876 | date: Sat Apr 30 15:24:48 2011 +0200
876 | date: Sat Apr 30 15:24:48 2011 +0200
877 | summary: B
877 | summary: B
878 |
878 |
879 | o changeset: 4:02de42196ebe
879 | o changeset: 4:02de42196ebe
880 | | parent: 2:24b6387c8c8c
880 | | parent: 2:24b6387c8c8c
881 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
881 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
882 | | date: Sat Apr 30 15:24:48 2011 +0200
882 | | date: Sat Apr 30 15:24:48 2011 +0200
883 | | summary: H
883 | | summary: H
884 | |
884 | |
885 | | o changeset: 3:eea13746799a
885 | | o changeset: 3:eea13746799a
886 | |/| parent: 2:24b6387c8c8c
886 | |/| parent: 2:24b6387c8c8c
887 | | | parent: 1:9520eea781bc
887 | | | parent: 1:9520eea781bc
888 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
888 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
889 | | | date: Sat Apr 30 15:24:48 2011 +0200
889 | | | date: Sat Apr 30 15:24:48 2011 +0200
890 | | | summary: G
890 | | | summary: G
891 | | |
891 | | |
892 | o | changeset: 2:24b6387c8c8c
892 | o | changeset: 2:24b6387c8c8c
893 |/ / parent: 0:cd010b8cd998
893 |/ / parent: 0:cd010b8cd998
894 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
894 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
895 | | date: Sat Apr 30 15:24:48 2011 +0200
895 | | date: Sat Apr 30 15:24:48 2011 +0200
896 | | summary: F
896 | | summary: F
897 | |
897 | |
898 | @ changeset: 1:9520eea781bc
898 | @ changeset: 1:9520eea781bc
899 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
899 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
900 | date: Sat Apr 30 15:24:48 2011 +0200
900 | date: Sat Apr 30 15:24:48 2011 +0200
901 | summary: E
901 | summary: E
902 |
902 |
903 o changeset: 0:cd010b8cd998
903 o changeset: 0:cd010b8cd998
904 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
904 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
905 date: Sat Apr 30 15:24:48 2011 +0200
905 date: Sat Apr 30 15:24:48 2011 +0200
906 summary: A
906 summary: A
907
907
908
908
909 Error Handling
909 Error Handling
910 ==============
910 ==============
911
911
912 Check that errors are properly returned to the client during push.
912 Check that errors are properly returned to the client during push.
913
913
914 Setting up
914 Setting up
915
915
916 $ cat > failpush.py << EOF
916 $ cat > failpush.py << EOF
917 > """A small extension that makes push fails when using bundle2
917 > """A small extension that makes push fails when using bundle2
918 >
918 >
919 > used to test error handling in bundle2
919 > used to test error handling in bundle2
920 > """
920 > """
921 >
921 >
922 > from mercurial import util
922 > from mercurial import util
923 > from mercurial import bundle2
923 > from mercurial import bundle2
924 > from mercurial import exchange
924 > from mercurial import exchange
925 > from mercurial import extensions
925 > from mercurial import extensions
926 >
926 >
927 > def _pushbundle2failpart(orig, pushop, bundler):
927 > def _pushbundle2failpart(orig, pushop, bundler):
928 > extradata = orig(pushop, bundler)
928 > extradata = orig(pushop, bundler)
929 > reason = pushop.ui.config('failpush', 'reason', None)
929 > reason = pushop.ui.config('failpush', 'reason', None)
930 > part = None
930 > part = None
931 > if reason == 'abort':
931 > if reason == 'abort':
932 > bundler.newpart('test:abort')
932 > bundler.newpart('test:abort')
933 > if reason == 'unknown':
933 > if reason == 'unknown':
934 > bundler.newpart('TEST:UNKNOWN')
934 > bundler.newpart('TEST:UNKNOWN')
935 > if reason == 'race':
935 > if reason == 'race':
936 > # 20 Bytes of crap
936 > # 20 Bytes of crap
937 > bundler.newpart('b2x:check:heads', data='01234567890123456789')
937 > bundler.newpart('b2x:check:heads', data='01234567890123456789')
938 > return extradata
938 > return extradata
939 >
939 >
940 > @bundle2.parthandler("test:abort")
940 > @bundle2.parthandler("test:abort")
941 > def handleabort(op, part):
941 > def handleabort(op, part):
942 > raise util.Abort('Abandon ship!', hint="don't panic")
942 > raise util.Abort('Abandon ship!', hint="don't panic")
943 >
943 >
944 > def uisetup(ui):
944 > def uisetup(ui):
945 > extensions.wrapfunction(exchange, '_pushbundle2extraparts', _pushbundle2failpart)
945 > extensions.wrapfunction(exchange, '_pushbundle2extraparts', _pushbundle2failpart)
946 >
946 >
947 > EOF
947 > EOF
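The extension above hooks in through `extensions.wrapfunction`, which swaps a function for a wrapper that receives the original as its first argument (the `orig` parameter of `_pushbundle2failpart`). A minimal standalone sketch of that pattern in plain Python; the `exchange` namespace here is a hypothetical stand-in for the real module:

```python
import types

# Hypothetical stand-in for the mercurial.exchange module; only the
# attribute name mirrors the real API used by failpush above.
exchange = types.SimpleNamespace(
    _pushbundle2extraparts=lambda pushop, bundler: ['base-part'])

def wrapfunction(container, funcname, wrapper):
    """Replace container.funcname so the old function is handed to the
    wrapper as its first argument (the 'orig' parameter above)."""
    origfn = getattr(container, funcname)
    def wrapped(*args, **kwargs):
        return wrapper(origfn, *args, **kwargs)
    setattr(container, funcname, wrapped)
    return origfn

def _pushbundle2failpart(orig, pushop, bundler):
    extradata = orig(pushop, bundler)  # run the original logic first
    extradata.append('failing-part')   # then sneak in the extra part
    return extradata

wrapfunction(exchange, '_pushbundle2extraparts', _pushbundle2failpart)
print(exchange._pushbundle2extraparts(None, None))  # ['base-part', 'failing-part']
```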

  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]
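The same abort message and hint come back over all three transports because bundle2 carries the server-side failure to the client instead of dropping the connection. A rough sketch of that round trip, with hypothetical part and function names (this is not Mercurial's actual code):

```python
class Abort(Exception):
    """Stand-in for util.Abort: a message plus an optional hint."""
    def __init__(self, message, hint=None):
        super().__init__(message)
        self.hint = hint

def server_process(push):
    # On failure, serialize the abort into an error part rather than
    # tearing down the connection.
    try:
        push()
    except Abort as exc:
        return ('b2x:error:abort', {'message': str(exc), 'hint': exc.hint})
    return ('b2x:reply', {})

def client_handle(part):
    # The client re-raises the transported abort locally.
    parttype, params = part
    if parttype == 'b2x:error:abort':
        raise Abort(params['message'], hint=params['hint'])

def failing_push():
    raise Abort('Abandon ship!', hint="don't panic")

reply = server_process(failing_push)
try:
    client_handle(reply)
except Abort as exc:
    print('abort: %s' % exc)   # abort: Abandon ship!
    print('(%s)' % exc.hint)   # (don't panic)
```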


Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for test:unknown
  [255]
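The push fails because `TEST:UNKNOWN` is spelled in uppercase: in bundle2 the case of the part type tells the receiver whether the part is mandatory, and an unknown mandatory part must abort processing, while an unknown lowercase (advisory) part may be skipped. A sketch of that dispatch rule with illustrative handlers, not Mercurial's actual code:

```python
class Abort(Exception):
    pass

# Known part handlers; any type missing from this table is unknown.
handlers = {'changegroup': lambda data: 'applied'}

def process_part(parttype, data):
    handler = handlers.get(parttype.lower())
    if handler is not None:
        return handler(data)
    if parttype.isupper():  # uppercase part type = mandatory
        raise Abort('missing support for %s' % parttype.lower())
    return None             # lowercase unknown part: advisory, ignore it

print(process_part('changegroup', b''))  # applied
print(process_part('test:debug', b''))   # None (advisory, skipped)
try:
    process_part('TEST:UNKNOWN', b'')
except Abort as exc:
    print('abort: %s' % exc)             # abort: missing support for test:unknown
```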

Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]
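The `b2x:check:heads` part is what makes the race detectable: the client records the remote heads its push was based on, and the server aborts when its current heads no longer match. The 20 bytes of garbage injected above can never match a real set of 20-byte node ids, so every transport reports the push-race error. A rough sketch of such a check, with illustrative names:

```python
# Sketch of a heads check in the spirit of b2x:check:heads: the part
# payload is a concatenation of fixed-length binary node ids.

class PushRaced(Exception):
    pass

def check_heads(part_data, current_heads, hash_len=20):
    expected = {part_data[i:i + hash_len]
                for i in range(0, len(part_data), hash_len)}
    if expected != set(current_heads):
        raise PushRaced('repository changed while pushing - '
                        'please try again')

heads = [b'\x11' * 20]
check_heads(b'\x11' * 20, heads)           # matches: no error
try:
    check_heads(b'0123456789' * 2, heads)  # 20 bytes of crap, as in the test
except PushRaced as exc:
    print(exc)
```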

Doing the actual push: hook abort

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason =
  > [hooks]
  > b2x-pretransactionclose.failpush = false
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  transaction abort!
  rollback completed
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  remote: transaction abort!
  remote: rollback completed
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  [255]
