bundle2: update all ``addpart`` callers to ``newpart``...
Pierre-Yves David
r21600:5e08f3b6 default
@@ -1,775 +1,774 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: (16 bits integer)

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If the first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process
  MUST stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage
  any crazy usage.
- Textual data allow easy human inspection of a bundle2 header in case of
  trouble.

Any application level options MUST go into a bundle2 part instead.

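The stream-level parameter encoding just described (a 16 bit big-endian size prefix followed by a space-separated, urlquoted blob) can be sketched as follows. This is a Python 3 illustration with hypothetical function names; the module itself is Python 2 and does not expose this API:

```python
import struct
from urllib.parse import quote, unquote

def encode_stream_params(params):
    # Serialize [(name, value-or-None)] pairs into the urlquoted,
    # space separated blob, prefixed with its 16 bit big-endian size.
    blocks = []
    for name, value in params:
        block = quote(name, safe='')
        if value is not None:
            block = '%s=%s' % (block, quote(value, safe=''))
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    return struct.pack('>H', len(blob)) + blob

def decode_stream_params(data):
    # Read the size prefix, then split and unquote each parameter.
    (size,) = struct.unpack('>H', data[:2])
    blob = data[2:2 + size].decode('ascii')
    params = {}
    if blob:
        for entry in blob.split(' '):
            name, sep, value = entry.partition('=')
            params[unquote(name)] = unquote(value) if sep else None
    return params
```

A value-less parameter decodes to None, matching the `<name>` (no `=`) form described above.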
Payload part
------------------------

The binary format is as follows:

:header size: (16 bits integer)

  The total number of bytes used by the part headers. When the header is
  empty (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  to interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name

  :partid: A 32 bits integer (unique in the bundle) that can be used to
    refer to this part.

  :parameters:

    A part's parameters may have arbitrary content, the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

      Mandatory parameters come first, then the advisory ones.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many
    as `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

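The part header and payload layouts above can be sketched concretely. These are illustrative helpers written for Python 3, not functions from the module; they follow the field order given in the format description:

```python
import struct

def pack_part_header(parttype, partid, mandatory=(), advisory=()):
    # <typesize><parttype><partid><mandatory-count><advisory-count>
    # <param-sizes><param-data>, preceded by a 16 bit header size.
    header = struct.pack('>B', len(parttype)) + parttype
    header += struct.pack('>I', partid)
    params = list(mandatory) + list(advisory)
    header += struct.pack('>BB', len(mandatory), len(advisory))
    sizes = []
    for key, value in params:
        sizes.extend((len(key), len(value)))
    header += struct.pack('>' + 'BB' * len(params), *sizes)
    for key, value in params:
        header += key + value
    # a zero header size would instead mark the end of the stream
    return struct.pack('>H', len(header)) + header

def pack_payload(data, chunksize=4096):
    # payload is <chunksize><chunkdata> repeated, ended by a zero size chunk
    chunks = b''
    for i in range(0, len(data), chunksize):
        piece = data[i:i + chunksize]
        chunks += struct.pack('>I', len(piece)) + piece
    return chunks + struct.pack('>I', 0)
```

Note how the `>B`/`>I`/`>BB`/`>H` format strings match the `_fparttypesize`, `_fpartid`, `_fpartparamcount` and `_fpartheadersize` constants defined later in the file.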
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is
ignored. When the process is aborted, the full bundle is still read from the
stream to keep the channel usable. But none of the parts read after an abort
are processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""
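The mandatory/advisory rule above reduces to a one-line check (a sketch, not code from the module):

```python
def ismandatory(parttype):
    # Any uppercase character in the part type makes the part mandatory;
    # an all-lowercase type is advisory and may be silently skipped.
    return parttype != parttype.lower()
```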

import util
import struct
import urllib
import string

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_magicstring = 'HG2X'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)

class UnknownPartError(KeyError):
    """error raised when no handler is found for a Mandatory part"""
    pass

parthandlermapping = {}

def parthandler(parttype):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype')
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        return func
    return _decorator
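The registration pattern can be exercised in isolation. This is a minimal, self-contained re-sketch of the decorator and its registry with illustrative names, not the module's own objects:

```python
handlers = {}

def parthandler(parttype):
    # Register the decorated function under the lowercased part type,
    # so lookups are case insensitive.
    def _decorator(func):
        key = parttype.lower()
        assert key not in handlers
        handlers[key] = func
        return func
    return _decorator

@parthandler('b2x:output')
def handleoutput(op, part):
    return 'handled %s' % part

# a part typed 'B2X:OUTPUT' (mandatory) still matches the same handler
assert handlers['B2X:OUTPUT'.lower()] is handleoutput
```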

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
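The record-keeping behaviour can be demonstrated with a minimal Python 3 re-sketch (the original class is Python 2 and keeps an extra reply registry, omitted here):

```python
class records(object):
    def __init__(self):
        self._categories = {}   # per-category lists
        self._sequences = []    # global chronological log

    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

r = records()
r.add('changegroup', {'return': 1})
r.add('output', 'hello')
r.add('changegroup', {'return': 0})

# per-category access keeps insertion order within the category
assert r['changegroup'] == ({'return': 1}, {'return': 0})
# whole-object iteration is chronological across categories
assert list(r)[0] == ('changegroup', {'return': 1})
```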

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

274 def processbundle(repo, unbundler, transactiongetter=_notransaction):
274 def processbundle(repo, unbundler, transactiongetter=_notransaction):
275 """This function process a bundle, apply effect to/from a repo
275 """This function process a bundle, apply effect to/from a repo
276
276
277 It iterates over each part then searches for and uses the proper handling
277 It iterates over each part then searches for and uses the proper handling
278 code to process the part. Parts are processed in order.
278 code to process the part. Parts are processed in order.
279
279
280 This is very early version of this function that will be strongly reworked
280 This is very early version of this function that will be strongly reworked
281 before final usage.
281 before final usage.
282
282
283 Unknown Mandatory part will abort the process.
283 Unknown Mandatory part will abort the process.
284 """
284 """
285 op = bundleoperation(repo, transactiongetter)
285 op = bundleoperation(repo, transactiongetter)
286 # todo:
286 # todo:
287 # - replace this is a init function soon.
287 # - replace this is a init function soon.
288 # - exception catching
288 # - exception catching
289 unbundler.params
289 unbundler.params
290 iterparts = unbundler.iterparts()
290 iterparts = unbundler.iterparts()
291 part = None
291 part = None
292 try:
292 try:
293 for part in iterparts:
293 for part in iterparts:
294 parttype = part.type
294 parttype = part.type
295 # part key are matched lower case
295 # part key are matched lower case
296 key = parttype.lower()
296 key = parttype.lower()
297 try:
297 try:
298 handler = parthandlermapping[key]
298 handler = parthandlermapping[key]
299 op.ui.debug('found a handler for part %r\n' % parttype)
299 op.ui.debug('found a handler for part %r\n' % parttype)
300 except KeyError:
300 except KeyError:
301 if key != parttype: # mandatory parts
301 if key != parttype: # mandatory parts
302 # todo:
302 # todo:
303 # - use a more precise exception
303 # - use a more precise exception
304 raise UnknownPartError(key)
304 raise UnknownPartError(key)
305 op.ui.debug('ignoring unknown advisory part %r\n' % key)
305 op.ui.debug('ignoring unknown advisory part %r\n' % key)
306 # consuming the part
306 # consuming the part
307 part.read()
307 part.read()
308 continue
308 continue
309
309
310 # handler is called outside the above try block so that we don't
310 # handler is called outside the above try block so that we don't
311 # risk catching KeyErrors from anything other than the
311 # risk catching KeyErrors from anything other than the
312 # parthandlermapping lookup (any KeyError raised by handler()
312 # parthandlermapping lookup (any KeyError raised by handler()
313 # itself represents a defect of a different variety).
313 # itself represents a defect of a different variety).
314 output = None
314 output = None
315 if op.reply is not None:
315 if op.reply is not None:
316 op.ui.pushbuffer(error=True)
316 op.ui.pushbuffer(error=True)
317 output = ''
317 output = ''
318 try:
318 try:
319 handler(op, part)
319 handler(op, part)
320 finally:
320 finally:
321 if output is not None:
321 if output is not None:
322 output = op.ui.popbuffer()
322 output = op.ui.popbuffer()
323 if output:
323 if output:
324 outpart = bundlepart('b2x:output',
324 op.reply.newpart('b2x:output',
325 advisoryparams=[('in-reply-to',
325 advisoryparams=[('in-reply-to',
326 str(part.id))],
326 str(part.id))],
327 data=output)
327 data=output)
328 op.reply.addpart(outpart)
329 part.read()
328 part.read()
330 except Exception, exc:
329 except Exception, exc:
331 if part is not None:
330 if part is not None:
332 # consume the bundle content
331 # consume the bundle content
333 part.read()
332 part.read()
334 for part in iterparts:
333 for part in iterparts:
335 # consume the bundle content
334 # consume the bundle content
336 part.read()
335 part.read()
337 # Small hack to let caller code distinguish exceptions from bundle2
336 # Small hack to let caller code distinguish exceptions from bundle2
338 # processing fron the ones from bundle1 processing. This is mostly
337 # processing fron the ones from bundle1 processing. This is mostly
339 # needed to handle different return codes to unbundle according to the
338 # needed to handle different return codes to unbundle according to the
340 # type of bundle. We should probably clean up or drop this return code
339 # type of bundle. We should probably clean up or drop this return code
341 # craziness in a future version.
340 # craziness in a future version.
342 exc.duringunbundle2 = True
341 exc.duringunbundle2 = True
343 raise
342 raise
344 return op
343 return op

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
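The caps blob round trip can be checked with a Python 3 re-sketch of the two functions above (the module itself uses the Python 2 `urllib` names):

```python
from urllib.parse import quote, unquote

def encodecaps(caps):
    # one capability per line, values comma separated after '='
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v, safe='') for v in caps[ca]]
        ca = quote(ca, safe='')
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        key, sep, vals = line.partition('=')
        vals = vals.split(',') if sep else []
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps

caps = {'HG2X': [], 'b2x:listkeys': ['bookmarks', 'phases']}
assert decodecaps(encodecaps(caps)) == caps
```

A capability with no values encodes to a bare line with no `=`, and always decodes back to an empty list.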

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
-        """create a new part for the containers"""
+        """create a new part and add it to the containers"""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


462 class unbundle20(unpackermixin):
461 class unbundle20(unpackermixin):
463 """interpret a bundle2 stream
462 """interpret a bundle2 stream
464
463
465 This class is fed with a binary stream and yields parts through its
464 This class is fed with a binary stream and yields parts through its
466 `iterparts` methods."""
465 `iterparts` methods."""
467
466
468 def __init__(self, ui, fp, header=None):
467 def __init__(self, ui, fp, header=None):
469 """If header is specified, we do not read it out of the stream."""
468 """If header is specified, we do not read it out of the stream."""
470 self.ui = ui
469 self.ui = ui
471 super(unbundle20, self).__init__(fp)
470 super(unbundle20, self).__init__(fp)
472 if header is None:
471 if header is None:
473 header = self._readexact(4)
472 header = self._readexact(4)
474 magic, version = header[0:2], header[2:4]
473 magic, version = header[0:2], header[2:4]
475 if magic != 'HG':
474 if magic != 'HG':
476 raise util.Abort(_('not a Mercurial bundle'))
475 raise util.Abort(_('not a Mercurial bundle'))
477 if version != '2X':
        if version != '2X':
            raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or raise an error.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be added here later to try to process the option
        # against a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise KeyError(name)

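The parameter handling above can be sketched standalone. This is a hypothetical Python 3 reconstruction (the real module is Python 2 and uses `urllib.unquote`); `parseparams` is an illustrative name, not part of the actual API:

```python
import urllib.parse

def parseparams(blob):
    """Parse a bundle2 stream-parameter blob: space-separated,
    URL-quoted ``key=value`` entries (the value may be absent).
    Lower-case first letter = advisory (unknown ones are skipped),
    upper-case first letter = mandatory (unknown ones are an error)."""
    params = {}
    if not blob:
        return params
    for entry in blob.split(' '):
        parts = [urllib.parse.unquote(p) for p in entry.split('=', 1)]
        if len(parts) < 2:
            parts.append(None)
        key, value = parts
        if not key:
            raise ValueError('empty parameter name')
        if not key[0].isalpha():
            raise ValueError('non letter first character: %r' % key)
        if not key[0].islower():
            # this sketch knows no parameters, so any mandatory one fails
            raise KeyError(key)
        params[key] = value
    return params
```

For example, `parseparams('caps=HG2X other')` yields `{'caps': 'HG2X', 'other': None}`, while a capitalized unknown key such as `Mandatory=1` raises `KeyError`.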
    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self.data = data
        self.mandatoryparams = mandatoryparams
        self.advisoryparams = advisoryparams

    def getchunks(self):
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # sizes
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)

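The framing produced by `getchunks` and consumed by `unbundlepart` can be illustrated with a round-trip sketch. This is a simplified reconstruction, not the real implementation: it assumes the field widths used at this revision (`>H` header size, `>B` type length, `>I` part id, `>BB` parameter counts, `>I` payload chunk size, matching `_fpartheadersize` and friends) and omits part parameters entirely:

```python
import struct

def framepart(parttype, partid, payload, chunksize=4):
    """Yield the wire framing of a parameter-less part: a uint16 header
    size, the header (type length, type, part id, two zero param counts),
    then length-prefixed payload chunks ended by a zero-length chunk."""
    header = (struct.pack('>B', len(parttype)) + parttype
              + struct.pack('>I', partid)
              + struct.pack('>BB', 0, 0))      # no mandatory/advisory params
    yield struct.pack('>H', len(header))
    yield header
    for i in range(0, len(payload), chunksize):
        chunk = payload[i:i + chunksize]
        yield struct.pack('>I', len(chunk))    # length-prefixed chunk
        yield chunk
    yield struct.pack('>I', 0)                 # end-of-payload marker

def readpart(fh):
    """Inverse of framepart: parse one parameter-less part from a stream."""
    headersize = struct.unpack('>H', fh.read(2))[0]
    header = fh.read(headersize)
    typesize = header[0]
    parttype = header[1:1 + typesize]
    partid = struct.unpack('>I', header[1 + typesize:5 + typesize])[0]
    payload = b''
    chunksize = struct.unpack('>I', fh.read(4))[0]
    while chunksize:
        payload += fh.read(chunksize)
        chunksize = struct.unpack('>I', fh.read(4))[0]
    return parttype, partid, payload
```

Framing a part and parsing the resulting bytes back recovers the original type, id, and payload, which is exactly the contract between `bundlepart.getchunks` and `unbundlepart._readheader`/`read` below.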
    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param sizes
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self.mandatoryparams = manparams
        self.advisoryparams = advparams
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data

@parthandler('b2x:changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        op.reply.newpart('b2x:reply:changegroup', (),
                         [('in-reply-to', str(inpart.id)),
                          ('return', '%i' % ret)])
    assert not inpart.read()

@parthandler('b2x:reply:changegroup')
def handlechangegroup(op, inpart):
    p = dict(inpart.advisoryparams)
    ret = int(p['return'])
    op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))

@parthandler('b2x:check:heads')
def handlechangegroup(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

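The `b2x:check:heads` payload is just a concatenation of 20-byte binary node hashes; the reading loop above can be exercised in isolation. A minimal sketch (`readheads` is an illustrative name, not part of the module):

```python
import io

def readheads(stream):
    """Read concatenated 20-byte binary node hashes from a stream,
    the way the b2x:check:heads handler consumes its part payload."""
    heads = []
    h = stream.read(20)
    while len(h) == 20:
        heads.append(h)
        h = stream.read(20)
    # anything shorter than 20 bytes at the end would be corruption
    assert not h, 'trailing garbage in heads payload'
    return heads
```

The handler then compares the decoded list against `op.repo.heads()` and raises `PushRaced` on mismatch, which is how the push race is detected server-side.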
@parthandler('b2x:output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.write(('remote: %s\n' % line))

@parthandler('b2x:replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('b2x:error:abort')
def handlereplycaps(op, inpart):
    """Used to transmit an abort error over the wire"""
    manargs = dict(inpart.mandatoryparams)
    advargs = dict(inpart.advisoryparams)
    raise util.Abort(manargs['message'], hint=advargs.get('hint'))

@parthandler('b2x:error:unknownpart')
def handlereplycaps(op, inpart):
    """Used to transmit an unknown-part error over the wire"""
    manargs = dict(inpart.mandatoryparams)
    raise UnknownPartError(manargs['parttype'])

@parthandler('b2x:error:pushraced')
def handlereplycaps(op, inpart):
    """Used to transmit a push race error over the wire"""
    manargs = dict(inpart.mandatoryparams)
    raise error.ResponseError(_('push failed:'), manargs['message'])
@@ -1,734 +1,730 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks, bundle2

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.unbundle10(fh, alg)
    elif version == '2X':
        return bundle2.unbundle20(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))

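The dispatch in `readbundle` hinges entirely on the 4-byte header: 2 bytes of magic plus 2 bytes of version. A minimal sketch of that classification logic (`bundleversion` is a hypothetical helper, not part of exchange.py):

```python
def bundleversion(header):
    """Classify a 4-byte bundle header the way readbundle does:
    'HG10' is a changegroup-style bundle, 'HG2X' is the experimental
    bundle2 format; anything else is rejected."""
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise ValueError('not a Mercurial bundle')
    if version == '10':
        return 'bundle10'
    elif version == '2X':
        return 'bundle2'
    raise ValueError('unknown bundle version %s' % version)
```

Note the real function also handles headerless streams (a leading NUL byte) by prepending an implicit `HG10`/`UN` header before this check.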
class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # Integer version of the push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.ret = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # set of all heads common after changeset bundle push
        self.commonheads = None

def push(repo, remote, force=False, revs=None, newbranch=False):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
    - None means nothing to push
    - 0 means HTTP error
    - 1 means we pushed and remote head count is unchanged *or*
      we have outgoing changesets but refused to push
    - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    locallock = None
    try:
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if _pushcheckoutgoing(pushop):
                pushop.repo.prepushoutgoinghooks(pushop.repo,
                                                 pushop.remote,
                                                 pushop.outgoing)
                if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
                                              False)
                    and pushop.remote.capable('bundle2-exp')):
                    _pushbundle2(pushop)
                else:
                    _pushchangeset(pushop)
            _pushcomputecommonheads(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
        finally:
            if lock is not None:
                lock.release()
    finally:
        if locallock is not None:
            locallock.release()

    _pushbookmark(pushop)
    return pushop.ret

def _pushdiscovery(pushop):
    # discovery
    unfi = pushop.repo.unfiltered()
    fci = discovery.findcommonincoming
    commoninc = fci(unfi, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for the 80-char limit's sake
            mso = _("push includes obsolete changeset: %s!")
            mst = "push includes %s changeset: %s!"
            # plain versions for the i18n tool to detect them
            _("push includes unstable changeset: %s!")
            _("push includes bumped changeset: %s!")
            _("push includes divergent changeset: %s!")
            # If we are to push if there is at least one
            # obsolete or unstable changeset in missing, at
            # least one of the missingheads will be obsolete or
            # unstable. So checking heads only is ok
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(_(mst)
                                     % (ctx.troubles()[0],
                                        ctx))
    newbm = pushop.ui.configlist('bookmarks', 'pushing')
    discovery.checkheads(unfi, pushop.remote, outgoing,
                         pushop.remoteheads,
                         pushop.newbranch,
                         bool(pushop.incoming),
                         newbm)
    return True

205
def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    # Send known heads to the server for race detection.
    capsblob = urllib.unquote(pushop.remote.capable('bundle2-exp'))
    caps = bundle2.decodecaps(capsblob)
    bundler = bundle2.bundle20(pushop.ui, caps)
    # create reply capability
    capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
    bundler.newpart('b2x:replycaps', data=capsblob)
    if not pushop.force:
        bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
    extrainfo = _pushbundle2extraparts(pushop, bundler)
    # add the changegroup bundle
    cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
    cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        reply = pushop.remote.unbundle(stream, ['force'], 'push')
    except bundle2.UnknownPartError, exc:
        raise util.Abort('missing support for %s' % exc)
    try:
        op = bundle2.processbundle(pushop.repo, reply)
    except bundle2.UnknownPartError, exc:
        raise util.Abort('missing support for %s' % exc)
    cgreplies = op.records.getreplies(cgpart.id)
    assert len(cgreplies['changegroup']) == 1
    pushop.ret = cgreplies['changegroup'][0]['return']
    _pushbundle2extrareply(pushop, op, extrainfo)

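The point of this changeset is visible above: `bundler.newpart(...)` replaces the older two-step `part = bundle2.bundlepart(...)` / `bundler.addpart(part)` sequence, and the caller gets the registered part (with its `id`) back directly. A minimal stand-in sketch of that pattern — `_Part` and `_Bundler` are invented names, not the real bundle2 classes:

```python
class _Part(object):
    """Stand-in for a bundle part: a named payload with an id."""
    def __init__(self, parttype, data=''):
        self.type = parttype
        self.data = data
        self.id = None  # assigned when the part is added to a bundle

class _Bundler(object):
    """Stand-in bundler illustrating the addpart -> newpart change."""
    def __init__(self):
        self._parts = []

    def addpart(self, part):
        # old calling convention: caller builds the part, then registers it
        part.id = len(self._parts)
        self._parts.append(part)

    def newpart(self, parttype, data=''):
        # new calling convention: create, register, and return in one call
        part = _Part(parttype, data=data)
        self.addpart(part)
        return part

bundler = _Bundler()
capspart = bundler.newpart('b2x:replycaps', data='caps-blob')
cgpart = bundler.newpart('B2X:CHANGEGROUP', data='chunks')
```

Because `newpart` returns the part, call sites like `_pushbundle2` can keep a handle (`cgpart`) and later look up replies by `cgpart.id` without ever naming the part class.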
def _pushbundle2extraparts(pushop, bundler):
    """hook function to let extensions add parts

    Return a dict to let extensions pass data to the reply processing.
    """
    return {}

def _pushbundle2extrareply(pushop, op, extrainfo):
    """hook function to let extensions react to part replies

    The dict from _pushbundle2extraparts is fed to this function.
    """
    pass

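A sketch of how an extension might use this hook pair: the "extraparts" hook adds a part and records its id in the returned dict, and the "extrareply" hook uses that dict to find the matching reply. All names below (`_Bundler`, `b2x:myext:marker`, the plain-dict reply store) are invented for illustration; the real hooks receive `pushop` and a bundle2 `op` object instead:

```python
class _Part(object):
    def __init__(self, parttype, partid):
        self.type = parttype
        self.id = partid

class _Bundler(object):
    """Toy bundler: newpart() registers a part and returns it."""
    def __init__(self):
        self.parts = []
    def newpart(self, parttype, data=''):
        part = _Part(parttype, len(self.parts))
        self.parts.append(part)
        return part

def extraparts(pushop, bundler):
    # add the extension's part and remember its id for the reply stage
    part = bundler.newpart('b2x:myext:marker', data='payload')
    return {'markerid': part.id}

def extrareply(pushop, replies, extrainfo):
    # look up the reply addressed to the part added above
    return replies.get(extrainfo['markerid'])

bundler = _Bundler()
info = extraparts(None, bundler)
result = extrareply(None, {info['markerid']: 'ok'}, info)
```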
def _pushchangeset(pushop):
    """Make the actual push of the changeset bundle to the remote repo"""
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.bundle10(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
                                        bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.ret = pushop.remote.unbundle(cg, remoteheads,
                                            'push')
    else:
        # we return an integer indicating remote head count
        # change
        pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())

def _pushcomputecommonheads(pushop):
    unfi = pushop.repo.unfiltered()
    if pushop.ret:
        # push succeeded, synchronize with the target of the push
        cheads = pushop.outgoing.missingheads
    elif pushop.revs is None:
        # entire push failed; synchronize on all common heads
        cheads = pushop.outgoing.commonheads
    else:
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(pushop.outgoing.common)
        nm = pushop.repo.changelog.nodemap
        cheads = [node for node in pushop.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          pushop.outgoing.commonheads,
                          pushop.outgoing.missing)
        cheads.extend(c.node() for c in revset)
    pushop.commonheads = cheads

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    unfi = pushop.repo.unfiltered()
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.ret is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        # Get the list of all revs draft on remote but public here.
        # XXX Beware that the revset breaks if droots is not strictly roots;
        # XXX we may want to ensure it is, but that is costly.
        outdated = unfi.set('heads((%ln::%ln) and public())',
                            droots, cheads)
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.locallocked:
        phases.advanceboundary(pushop.repo, phase, nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    if (obsolete._enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    revnums = map(repo.changelog.rev, pushop.revs or [])
    ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
    (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
     ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
                           srchex=hex)

    for b, scid, dcid in advsrc:
        if ancestors and repo[scid].rev() not in ancestors:
            continue
        if remote.pushkey('bookmarks', b, dcid, scid):
            ui.status(_("updating bookmark %s\n") % b)
        else:
            ui.warn(_('updating bookmark %s failed!\n') % b)

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # do we force pull?
        self.force = force
        # the name of the pull transaction
        self._trname = 'pull\n' + util.hidepassword(remote.url())
        # hold the transaction once created
        self._tr = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps remaining to do (related to future bundle2 usage)
        self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    def gettransaction(self):
        """get appropriate pull transaction, creating it if needed"""
        if self._tr is None:
            self._tr = self.repo.transaction(self._trname)
        return self._tr

    def closetransaction(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def releasetransaction(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

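The three transaction helpers above implement a lazy open / close-on-success / release-always lifecycle: nothing is opened unless a pull step asks for a transaction, `closetransaction` commits only if one was opened, and `releasetransaction` runs in a `finally` block as the rollback safety net. A self-contained sketch of that pattern (simplified stand-in, no real repo or journal involved):

```python
class _LazyTr(object):
    """Simplified model of pulloperation's transaction handling."""
    def __init__(self):
        self._tr = None
        self.log = []

    def gettransaction(self):
        # opened on first use only; later callers reuse it
        if self._tr is None:
            self._tr = object()
            self.log.append('open')
        return self._tr

    def closetransaction(self):
        # commit, but only if something was actually opened
        if self._tr is not None:
            self.log.append('close')

    def releasetransaction(self):
        # always reached via finally; a real release would roll back
        # any transaction that was opened but never closed
        if self._tr is not None:
            self.log.append('release')

op = _LazyTr()
try:
    op.gettransaction()   # first caller opens the transaction
    op.gettransaction()   # second caller reuses it, no second 'open'
    op.closetransaction()
finally:
    op.releasetransaction()

noop = _LazyTr()
noop.closetransaction()   # nothing opened: both calls are no-ops
noop.releasetransaction()
```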
def pull(repo, remote, heads=None, force=False):
    pullop = pulloperation(repo, remote, heads, force)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    lock = pullop.repo.lock()
    try:
        _pulldiscovery(pullop)
        if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
            and pullop.remote.capable('bundle2-exp')):
            _pullbundle2(pullop)
        if 'changegroup' in pullop.todosteps:
            _pullchangeset(pullop)
        if 'phases' in pullop.todosteps:
            _pullphase(pullop)
        if 'obsmarkers' in pullop.todosteps:
            _pullobsolete(pullop)
        pullop.closetransaction()
    finally:
        pullop.releasetransaction()
        lock.release()

    return pullop.cgresult

def _pulldiscovery(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will eventually handle
    all discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    pullop.common, pullop.fetch, pullop.rheads = tmp

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    kwargs = {'bundlecaps': set(['HG2X'])}
    capsblob = bundle2.encodecaps(pullop.repo.bundle2caps)
    kwargs['bundlecaps'].add('bundle2=' + urllib.quote(capsblob))
    # pulling changegroup
    pullop.todosteps.remove('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    _pullbundle2extraprepare(pullop, kwargs)
    if kwargs.keys() == ['format']:
        return # nothing to pull
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.UnknownPartError, exc:
        raise util.Abort('missing support for %s' % exc)

    if pullop.fetch:
        assert len(op.records['changegroup']) == 1
        pullop.cgresult = op.records['changegroup'][0]['return']

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay the opening of the transaction as late as possible so we
    # don't open a transaction for nothing and don't break future useful
    # rollback calls
    pullop.todosteps.remove('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    pullop.todosteps.remove('phases')
    remotephases = pullop.remote.listkeys('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        phases.advanceboundary(pullop.repo, phases.public, pheads)
        phases.advanceboundary(pullop.repo, phases.draft,
                               pullop.pulledsubset)
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        phases.advanceboundary(pullop.repo, phases.public,
                               pullop.pulledsubset)

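Both branches above rely on `advanceboundary` only ever lowering phases (toward public), so re-pulling can never demote an already-public changeset back to draft. A toy model of that invariant — phases reduced to integers with 0 = public and 1 = draft; this is not the real phases API, just the monotonicity idea:

```python
PUBLIC, DRAFT = 0, 1

def advanceboundary(phaseof, targetphase, nodes):
    # Move nodes to targetphase, but never in the "wrong" direction:
    # a node already at a lower (more public) phase is left alone.
    for n in nodes:
        if phaseof.get(n, DRAFT) > targetphase:
            phaseof[n] = targetphase

phaseof = {'a': DRAFT, 'b': PUBLIC, 'c': DRAFT}
# publishing remote: everything in the pulled subset becomes public
advanceboundary(phaseof, PUBLIC, ['a', 'b'])
# a later draft advance cannot demote the now-public 'a'
advanceboundary(phaseof, DRAFT, ['a', 'c'])
```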
def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    pullop.todosteps.remove('obsmarkers')
    tr = None
    if obsolete._enabled:
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

644 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
641 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
645 **kwargs):
642 **kwargs):
646 """return a full bundle (with potentially multiple kind of parts)
643 """return a full bundle (with potentially multiple kind of parts)
647
644
648 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
645 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
649 passed. For now, the bundle can contain only changegroup, but this will
646 passed. For now, the bundle can contain only changegroup, but this will
650 change when more part types become available for bundle2.
647 change when more part types become available for bundle2.
651
648
652 This is different from changegroup.getbundle that only returns an HG10
649 This is different from changegroup.getbundle that only returns an HG10
653 changegroup bundle. They may eventually get reunited in the future when we
650 changegroup bundle. They may eventually get reunited in the future when we
654 have a clearer idea of the API we want to query different data.
651 have a clearer idea of the API we want to query different data.
655
652
656 The implementation is at a very early stage and will get massive rework
653 The implementation is at a very early stage and will get massive rework
657 when the API of bundle is refined.
654 when the API of bundle is refined.
658 """
655 """
659 # build changegroup bundle here.
656 # build changegroup bundle here.
660 cg = changegroup.getbundle(repo, source, heads=heads,
657 cg = changegroup.getbundle(repo, source, heads=heads,
661 common=common, bundlecaps=bundlecaps)
658 common=common, bundlecaps=bundlecaps)
662 if bundlecaps is None or 'HG2X' not in bundlecaps:
659 if bundlecaps is None or 'HG2X' not in bundlecaps:
663 return cg
660 return cg
664 # very crude first implementation,
661 # very crude first implementation,
665 # the bundle API will change and the generation will be done lazily.
662 # the bundle API will change and the generation will be done lazily.
666 b2caps = {}
663 b2caps = {}
667 for bcaps in bundlecaps:
664 for bcaps in bundlecaps:
668 if bcaps.startswith('bundle2='):
665 if bcaps.startswith('bundle2='):
669 blob = urllib.unquote(bcaps[len('bundle2='):])
666 blob = urllib.unquote(bcaps[len('bundle2='):])
670 b2caps.update(bundle2.decodecaps(blob))
667 b2caps.update(bundle2.decodecaps(blob))
671 bundler = bundle2.bundle20(repo.ui, b2caps)
668 bundler = bundle2.bundle20(repo.ui, b2caps)
672 if cg:
669 if cg:
673 part = bundle2.bundlepart('b2x:changegroup', data=cg.getchunks())
670 bundler.newpart('b2x:changegroup', data=cg.getchunks())
674 bundler.addpart(part)
675 _getbundleextrapart(bundler, repo, source, heads=heads, common=common,
671 _getbundleextrapart(bundler, repo, source, heads=heads, common=common,
676 bundlecaps=bundlecaps, **kwargs)
672 bundlecaps=bundlecaps, **kwargs)
677 return util.chunkbuffer(bundler.getchunks())
673 return util.chunkbuffer(bundler.getchunks())
678
674
679 def _getbundleextrapart(bundler, repo, source, heads=None, common=None,
675 def _getbundleextrapart(bundler, repo, source, heads=None, common=None,
680 bundlecaps=None, **kwargs):
676 bundlecaps=None, **kwargs):
681 """hook function to let extensions add parts to the requested bundle"""
677 """hook function to let extensions add parts to the requested bundle"""
682 pass
678 pass
683
679
684 def check_heads(repo, their_heads, context):
680 def check_heads(repo, their_heads, context):
685 """check if the heads of a repo have been modified
681 """check if the heads of a repo have been modified
686
682
687 Used by peer for unbundling.
683 Used by peer for unbundling.
688 """
684 """
689 heads = repo.heads()
685 heads = repo.heads()
690 heads_hash = util.sha1(''.join(sorted(heads))).digest()
686 heads_hash = util.sha1(''.join(sorted(heads))).digest()
691 if not (their_heads == ['force'] or their_heads == heads or
687 if not (their_heads == ['force'] or their_heads == heads or
692 their_heads == ['hashed', heads_hash]):
688 their_heads == ['hashed', heads_hash]):
693 # someone else committed/pushed/unbundled while we
689 # someone else committed/pushed/unbundled while we
694 # were transferring data
690 # were transferring data
695 raise error.PushRaced('repository changed while %s - '
691 raise error.PushRaced('repository changed while %s - '
696 'please try again' % context)
692 'please try again' % context)
697
693
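In isolation, the digest that `check_heads` compares can be sketched as below; `heads_hash` is a hypothetical helper name standing in for the inline `util.sha1(''.join(sorted(heads))).digest()` expression, written here with the stdlib `hashlib`:

```python
import hashlib

def heads_hash(heads):
    # Digest of the sorted binary head nodes: clients that observed the
    # same set of heads agree on the hash regardless of ordering.
    return hashlib.sha1(b''.join(sorted(heads))).digest()

a = heads_hash([b'\x02' * 20, b'\x01' * 20])
b = heads_hash([b'\x01' * 20, b'\x02' * 20])
assert a == b        # order-independent
assert len(a) == 20  # sha1 digest is 20 bytes
```

A client sends `['hashed', <digest>]`; if the repository's heads changed while the bundle was in flight, the digests no longer match and `PushRaced` is raised.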
698 def unbundle(repo, cg, heads, source, url):
694 def unbundle(repo, cg, heads, source, url):
699 """Apply a bundle to a repo.
695 """Apply a bundle to a repo.
700
696
701 this function makes sure the repo is locked during the application and has
697 this function makes sure the repo is locked during the application and has
702 a mechanism to check that no push race occurred between the creation of the
698 a mechanism to check that no push race occurred between the creation of the
703 bundle and its application.
699 bundle and its application.
704
700
705 If the push was raced, a PushRaced exception is raised."""
701 If the push was raced, a PushRaced exception is raised."""
706 r = 0
702 r = 0
707 # need a transaction when processing a bundle2 stream
703 # need a transaction when processing a bundle2 stream
708 tr = None
704 tr = None
709 lock = repo.lock()
705 lock = repo.lock()
710 try:
706 try:
711 check_heads(repo, heads, 'uploading changes')
707 check_heads(repo, heads, 'uploading changes')
712 # push can proceed
708 # push can proceed
713 if util.safehasattr(cg, 'params'):
709 if util.safehasattr(cg, 'params'):
714 try:
710 try:
715 tr = repo.transaction('unbundle')
711 tr = repo.transaction('unbundle')
716 tr.hookargs['bundle2-exp'] = '1'
712 tr.hookargs['bundle2-exp'] = '1'
717 r = bundle2.processbundle(repo, cg, lambda: tr).reply
713 r = bundle2.processbundle(repo, cg, lambda: tr).reply
718 cl = repo.unfiltered().changelog
714 cl = repo.unfiltered().changelog
719 p = cl.writepending() and repo.root or ""
715 p = cl.writepending() and repo.root or ""
720 repo.hook('b2x-pretransactionclose', throw=True, source=source,
716 repo.hook('b2x-pretransactionclose', throw=True, source=source,
721 url=url, pending=p, **tr.hookargs)
717 url=url, pending=p, **tr.hookargs)
722 tr.close()
718 tr.close()
723 repo.hook('b2x-transactionclose', source=source, url=url,
719 repo.hook('b2x-transactionclose', source=source, url=url,
724 **tr.hookargs)
720 **tr.hookargs)
725 except Exception, exc:
721 except Exception, exc:
726 exc.duringunbundle2 = True
722 exc.duringunbundle2 = True
727 raise
723 raise
728 else:
724 else:
729 r = changegroup.addchangegroup(repo, cg, source, url)
725 r = changegroup.addchangegroup(repo, cg, source, url)
730 finally:
726 finally:
731 if tr is not None:
727 if tr is not None:
732 tr.release()
728 tr.release()
733 lock.release()
729 lock.release()
734 return r
730 return r
@@ -1,837 +1,833 b''
1 # wireproto.py - generic wire protocol support functions
1 # wireproto.py - generic wire protocol support functions
2 #
2 #
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 import urllib, tempfile, os, sys
8 import urllib, tempfile, os, sys
9 from i18n import _
9 from i18n import _
10 from node import bin, hex
10 from node import bin, hex
11 import changegroup as changegroupmod, bundle2
11 import changegroup as changegroupmod, bundle2
12 import peer, error, encoding, util, store, exchange
12 import peer, error, encoding, util, store, exchange
13
13
14
14
15 class abstractserverproto(object):
15 class abstractserverproto(object):
16 """abstract class that summarizes the protocol API
16 """abstract class that summarizes the protocol API
17
17
18 Used as reference and documentation.
18 Used as reference and documentation.
19 """
19 """
20
20
21 def getargs(self, args):
21 def getargs(self, args):
22 """return the value for arguments in <args>
22 """return the value for arguments in <args>
23
23
24 returns a list of values (same order as <args>)"""
24 returns a list of values (same order as <args>)"""
25 raise NotImplementedError()
25 raise NotImplementedError()
26
26
27 def getfile(self, fp):
27 def getfile(self, fp):
28 """write the whole content of a file into a file like object
28 """write the whole content of a file into a file like object
29
29
30 The file is in the form::
30 The file is in the form::
31
31
32 (<chunk-size>\n<chunk>)+0\n
32 (<chunk-size>\n<chunk>)+0\n
33
33
34 chunk size is the ascii version of the int.
34 chunk size is the ascii version of the int.
35 """
35 """
36 raise NotImplementedError()
36 raise NotImplementedError()
37
37
38 def redirect(self):
38 def redirect(self):
39 """may setup interception for stdout and stderr
39 """may setup interception for stdout and stderr
40
40
41 See also the `restore` method."""
41 See also the `restore` method."""
42 raise NotImplementedError()
42 raise NotImplementedError()
43
43
44 # If the `redirect` function does install interception, the `restore`
44 # If the `redirect` function does install interception, the `restore`
45 # function MUST be defined. If interception is not used, this function
45 # function MUST be defined. If interception is not used, this function
46 # MUST NOT be defined.
46 # MUST NOT be defined.
47 #
47 #
48 # left commented here on purpose
48 # left commented here on purpose
49 #
49 #
50 #def restore(self):
50 #def restore(self):
51 # """reinstall previous stdout and stderr and return intercepted stdout
51 # """reinstall previous stdout and stderr and return intercepted stdout
52 # """
52 # """
53 # raise NotImplementedError()
53 # raise NotImplementedError()
54
54
55 def groupchunks(self, cg):
55 def groupchunks(self, cg):
56 """return 4096 chunks from a changegroup object
56 """return 4096 chunks from a changegroup object
57
57
58 Some protocols may have compressed the contents."""
58 Some protocols may have compressed the contents."""
59 raise NotImplementedError()
59 raise NotImplementedError()
60
60
61 # abstract batching support
61 # abstract batching support
62
62
63 class future(object):
63 class future(object):
64 '''placeholder for a value to be set later'''
64 '''placeholder for a value to be set later'''
65 def set(self, value):
65 def set(self, value):
66 if util.safehasattr(self, 'value'):
66 if util.safehasattr(self, 'value'):
67 raise error.RepoError("future is already set")
67 raise error.RepoError("future is already set")
68 self.value = value
68 self.value = value
69
69
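The set-once contract of `future` can be shown with a minimal standalone sketch (using `RuntimeError` in place of Mercurial's `error.RepoError`):

```python
class Future(object):
    """Placeholder for a value to be set later; setting it twice is an error."""
    def set(self, value):
        if hasattr(self, 'value'):
            raise RuntimeError('future is already set')
        self.value = value

f = Future()
f.set(42)
assert f.value == 42  # readable once set; a second set() raises
```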
70 class batcher(object):
70 class batcher(object):
71 '''base class for batches of commands submittable in a single request
71 '''base class for batches of commands submittable in a single request
72
72
73 All methods invoked on instances of this class are simply queued and
73 All methods invoked on instances of this class are simply queued and
74 return a future for the result. Once you call submit(), all the queued
74 return a future for the result. Once you call submit(), all the queued
75 calls are performed and the results set in their respective futures.
75 calls are performed and the results set in their respective futures.
76 '''
76 '''
77 def __init__(self):
77 def __init__(self):
78 self.calls = []
78 self.calls = []
79 def __getattr__(self, name):
79 def __getattr__(self, name):
80 def call(*args, **opts):
80 def call(*args, **opts):
81 resref = future()
81 resref = future()
82 self.calls.append((name, args, opts, resref,))
82 self.calls.append((name, args, opts, resref,))
83 return resref
83 return resref
84 return call
84 return call
85 def submit(self):
85 def submit(self):
86 pass
86 pass
87
87
88 class localbatch(batcher):
88 class localbatch(batcher):
89 '''performs the queued calls directly'''
89 '''performs the queued calls directly'''
90 def __init__(self, local):
90 def __init__(self, local):
91 batcher.__init__(self)
91 batcher.__init__(self)
92 self.local = local
92 self.local = local
93 def submit(self):
93 def submit(self):
94 for name, args, opts, resref in self.calls:
94 for name, args, opts, resref in self.calls:
95 resref.set(getattr(self.local, name)(*args, **opts))
95 resref.set(getattr(self.local, name)(*args, **opts))
96
96
97 class remotebatch(batcher):
97 class remotebatch(batcher):
98 '''batches the queued calls; uses as few roundtrips as possible'''
98 '''batches the queued calls; uses as few roundtrips as possible'''
99 def __init__(self, remote):
99 def __init__(self, remote):
100 '''remote must support _submitbatch(encbatch) and
100 '''remote must support _submitbatch(encbatch) and
101 _submitone(op, encargs)'''
101 _submitone(op, encargs)'''
102 batcher.__init__(self)
102 batcher.__init__(self)
103 self.remote = remote
103 self.remote = remote
104 def submit(self):
104 def submit(self):
105 req, rsp = [], []
105 req, rsp = [], []
106 for name, args, opts, resref in self.calls:
106 for name, args, opts, resref in self.calls:
107 mtd = getattr(self.remote, name)
107 mtd = getattr(self.remote, name)
108 batchablefn = getattr(mtd, 'batchable', None)
108 batchablefn = getattr(mtd, 'batchable', None)
109 if batchablefn is not None:
109 if batchablefn is not None:
110 batchable = batchablefn(mtd.im_self, *args, **opts)
110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 encargsorres, encresref = batchable.next()
111 encargsorres, encresref = batchable.next()
112 if encresref:
112 if encresref:
113 req.append((name, encargsorres,))
113 req.append((name, encargsorres,))
114 rsp.append((batchable, encresref, resref,))
114 rsp.append((batchable, encresref, resref,))
115 else:
115 else:
116 resref.set(encargsorres)
116 resref.set(encargsorres)
117 else:
117 else:
118 if req:
118 if req:
119 self._submitreq(req, rsp)
119 self._submitreq(req, rsp)
120 req, rsp = [], []
120 req, rsp = [], []
121 resref.set(mtd(*args, **opts))
121 resref.set(mtd(*args, **opts))
122 if req:
122 if req:
123 self._submitreq(req, rsp)
123 self._submitreq(req, rsp)
124 def _submitreq(self, req, rsp):
124 def _submitreq(self, req, rsp):
125 encresults = self.remote._submitbatch(req)
125 encresults = self.remote._submitbatch(req)
126 for encres, r in zip(encresults, rsp):
126 for encres, r in zip(encresults, rsp):
127 batchable, encresref, resref = r
127 batchable, encresref, resref = r
128 encresref.set(encres)
128 encresref.set(encres)
129 resref.set(batchable.next())
129 resref.set(batchable.next())
130
130
131 def batchable(f):
131 def batchable(f):
132 '''annotation for batchable methods
132 '''annotation for batchable methods
133
133
134 Such methods must implement a coroutine as follows:
134 Such methods must implement a coroutine as follows:
135
135
136 @batchable
136 @batchable
137 def sample(self, one, two=None):
137 def sample(self, one, two=None):
138 # Handle locally computable results first:
138 # Handle locally computable results first:
139 if not one:
139 if not one:
140 yield "a local result", None
140 yield "a local result", None
141 # Build list of encoded arguments suitable for your wire protocol:
141 # Build list of encoded arguments suitable for your wire protocol:
142 encargs = [('one', encode(one),), ('two', encode(two),)]
142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 # Create future for injection of encoded result:
143 # Create future for injection of encoded result:
144 encresref = future()
144 encresref = future()
145 # Return encoded arguments and future:
145 # Return encoded arguments and future:
146 yield encargs, encresref
146 yield encargs, encresref
147 # Assuming the future to be filled with the result from the batched
147 # Assuming the future to be filled with the result from the batched
148 # request now. Decode it:
148 # request now. Decode it:
149 yield decode(encresref.value)
149 yield decode(encresref.value)
150
150
151 The decorator returns a function which wraps this coroutine as a plain
151 The decorator returns a function which wraps this coroutine as a plain
152 method, but adds the original method as an attribute called "batchable",
152 method, but adds the original method as an attribute called "batchable",
153 which is used by remotebatch to split the call into separate encoding and
153 which is used by remotebatch to split the call into separate encoding and
154 decoding phases.
154 decoding phases.
155 '''
155 '''
156 def plain(*args, **opts):
156 def plain(*args, **opts):
157 batchable = f(*args, **opts)
157 batchable = f(*args, **opts)
158 encargsorres, encresref = batchable.next()
158 encargsorres, encresref = batchable.next()
159 if not encresref:
159 if not encresref:
160 return encargsorres # a local result in this case
160 return encargsorres # a local result in this case
161 self = args[0]
161 self = args[0]
162 encresref.set(self._submitone(f.func_name, encargsorres))
162 encresref.set(self._submitone(f.func_name, encargsorres))
163 return batchable.next()
163 return batchable.next()
164 setattr(plain, 'batchable', f)
164 setattr(plain, 'batchable', f)
165 return plain
165 return plain
166
166
167 # list of nodes encoding / decoding
167 # list of nodes encoding / decoding
168
168
169 def decodelist(l, sep=' '):
169 def decodelist(l, sep=' '):
170 if l:
170 if l:
171 return map(bin, l.split(sep))
171 return map(bin, l.split(sep))
172 return []
172 return []
173
173
174 def encodelist(l, sep=' '):
174 def encodelist(l, sep=' '):
175 return sep.join(map(hex, l))
175 return sep.join(map(hex, l))
176
176
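A round-trip of these helpers can be sketched with the stdlib `binascii` standing in for `hex`/`bin` from Mercurial's `node` module:

```python
from binascii import hexlify, unhexlify

def encodelist(l, sep=' '):
    # hex-encode each 20-byte binary node and join with the separator
    return sep.join(hexlify(n).decode('ascii') for n in l)

def decodelist(l, sep=' '):
    if l:
        return [unhexlify(x) for x in l.split(sep)]
    return []

nodes = [b'\x00' * 20, b'\xff' * 20]
assert decodelist(encodelist(nodes)) == nodes
assert decodelist('') == []  # an empty wire value decodes to an empty list
```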
177 # batched call argument encoding
177 # batched call argument encoding
178
178
179 def escapearg(plain):
179 def escapearg(plain):
180 return (plain
180 return (plain
181 .replace(':', '::')
181 .replace(':', '::')
182 .replace(',', ':,')
182 .replace(',', ':,')
183 .replace(';', ':;')
183 .replace(';', ':;')
184 .replace('=', ':='))
184 .replace('=', ':='))
185
185
186 def unescapearg(escaped):
186 def unescapearg(escaped):
187 return (escaped
187 return (escaped
188 .replace(':=', '=')
188 .replace(':=', '=')
189 .replace(':;', ';')
189 .replace(':;', ';')
190 .replace(':,', ',')
190 .replace(':,', ',')
191 .replace('::', ':'))
191 .replace('::', ':'))
192
192
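Because `:` is escaped first and unescaped last, the encoding round-trips cleanly; a quick standalone check of the two helpers:

```python
def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))

arg = 'key=a,b;c:d'
assert escapearg(arg) == 'key:=a:,b:;c::d'
assert unescapearg(escapearg(arg)) == arg  # lossless round-trip
```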
193 # client side
193 # client side
194
194
195 class wirepeer(peer.peerrepository):
195 class wirepeer(peer.peerrepository):
196
196
197 def batch(self):
197 def batch(self):
198 return remotebatch(self)
198 return remotebatch(self)
199 def _submitbatch(self, req):
199 def _submitbatch(self, req):
200 cmds = []
200 cmds = []
201 for op, argsdict in req:
201 for op, argsdict in req:
202 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
202 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
203 cmds.append('%s %s' % (op, args))
203 cmds.append('%s %s' % (op, args))
204 rsp = self._call("batch", cmds=';'.join(cmds))
204 rsp = self._call("batch", cmds=';'.join(cmds))
205 return rsp.split(';')
205 return rsp.split(';')
206 def _submitone(self, op, args):
206 def _submitone(self, op, args):
207 return self._call(op, **args)
207 return self._call(op, **args)
208
208
209 @batchable
209 @batchable
210 def lookup(self, key):
210 def lookup(self, key):
211 self.requirecap('lookup', _('look up remote revision'))
211 self.requirecap('lookup', _('look up remote revision'))
212 f = future()
212 f = future()
213 yield {'key': encoding.fromlocal(key)}, f
213 yield {'key': encoding.fromlocal(key)}, f
214 d = f.value
214 d = f.value
215 success, data = d[:-1].split(" ", 1)
215 success, data = d[:-1].split(" ", 1)
216 if int(success):
216 if int(success):
217 yield bin(data)
217 yield bin(data)
218 self._abort(error.RepoError(data))
218 self._abort(error.RepoError(data))
219
219
220 @batchable
220 @batchable
221 def heads(self):
221 def heads(self):
222 f = future()
222 f = future()
223 yield {}, f
223 yield {}, f
224 d = f.value
224 d = f.value
225 try:
225 try:
226 yield decodelist(d[:-1])
226 yield decodelist(d[:-1])
227 except ValueError:
227 except ValueError:
228 self._abort(error.ResponseError(_("unexpected response:"), d))
228 self._abort(error.ResponseError(_("unexpected response:"), d))
229
229
230 @batchable
230 @batchable
231 def known(self, nodes):
231 def known(self, nodes):
232 f = future()
232 f = future()
233 yield {'nodes': encodelist(nodes)}, f
233 yield {'nodes': encodelist(nodes)}, f
234 d = f.value
234 d = f.value
235 try:
235 try:
236 yield [bool(int(f)) for f in d]
236 yield [bool(int(f)) for f in d]
237 except ValueError:
237 except ValueError:
238 self._abort(error.ResponseError(_("unexpected response:"), d))
238 self._abort(error.ResponseError(_("unexpected response:"), d))
239
239
240 @batchable
240 @batchable
241 def branchmap(self):
241 def branchmap(self):
242 f = future()
242 f = future()
243 yield {}, f
243 yield {}, f
244 d = f.value
244 d = f.value
245 try:
245 try:
246 branchmap = {}
246 branchmap = {}
247 for branchpart in d.splitlines():
247 for branchpart in d.splitlines():
248 branchname, branchheads = branchpart.split(' ', 1)
248 branchname, branchheads = branchpart.split(' ', 1)
249 branchname = encoding.tolocal(urllib.unquote(branchname))
249 branchname = encoding.tolocal(urllib.unquote(branchname))
250 branchheads = decodelist(branchheads)
250 branchheads = decodelist(branchheads)
251 branchmap[branchname] = branchheads
251 branchmap[branchname] = branchheads
252 yield branchmap
252 yield branchmap
253 except TypeError:
253 except TypeError:
254 self._abort(error.ResponseError(_("unexpected response:"), d))
254 self._abort(error.ResponseError(_("unexpected response:"), d))
255
255
256 def branches(self, nodes):
256 def branches(self, nodes):
257 n = encodelist(nodes)
257 n = encodelist(nodes)
258 d = self._call("branches", nodes=n)
258 d = self._call("branches", nodes=n)
259 try:
259 try:
260 br = [tuple(decodelist(b)) for b in d.splitlines()]
260 br = [tuple(decodelist(b)) for b in d.splitlines()]
261 return br
261 return br
262 except ValueError:
262 except ValueError:
263 self._abort(error.ResponseError(_("unexpected response:"), d))
263 self._abort(error.ResponseError(_("unexpected response:"), d))
264
264
265 def between(self, pairs):
265 def between(self, pairs):
266 batch = 8 # avoid giant requests
266 batch = 8 # avoid giant requests
267 r = []
267 r = []
268 for i in xrange(0, len(pairs), batch):
268 for i in xrange(0, len(pairs), batch):
269 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
269 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
270 d = self._call("between", pairs=n)
270 d = self._call("between", pairs=n)
271 try:
271 try:
272 r.extend(l and decodelist(l) or [] for l in d.splitlines())
272 r.extend(l and decodelist(l) or [] for l in d.splitlines())
273 except ValueError:
273 except ValueError:
274 self._abort(error.ResponseError(_("unexpected response:"), d))
274 self._abort(error.ResponseError(_("unexpected response:"), d))
275 return r
275 return r
276
276
277 @batchable
277 @batchable
278 def pushkey(self, namespace, key, old, new):
278 def pushkey(self, namespace, key, old, new):
279 if not self.capable('pushkey'):
279 if not self.capable('pushkey'):
280 yield False, None
280 yield False, None
281 f = future()
281 f = future()
282 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
282 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
283 yield {'namespace': encoding.fromlocal(namespace),
283 yield {'namespace': encoding.fromlocal(namespace),
284 'key': encoding.fromlocal(key),
284 'key': encoding.fromlocal(key),
285 'old': encoding.fromlocal(old),
285 'old': encoding.fromlocal(old),
286 'new': encoding.fromlocal(new)}, f
286 'new': encoding.fromlocal(new)}, f
287 d = f.value
287 d = f.value
288 d, output = d.split('\n', 1)
288 d, output = d.split('\n', 1)
289 try:
289 try:
290 d = bool(int(d))
290 d = bool(int(d))
291 except ValueError:
291 except ValueError:
292 raise error.ResponseError(
292 raise error.ResponseError(
293 _('push failed (unexpected response):'), d)
293 _('push failed (unexpected response):'), d)
294 for l in output.splitlines(True):
294 for l in output.splitlines(True):
295 self.ui.status(_('remote: '), l)
295 self.ui.status(_('remote: '), l)
296 yield d
296 yield d
297
297
298 @batchable
298 @batchable
299 def listkeys(self, namespace):
299 def listkeys(self, namespace):
300 if not self.capable('pushkey'):
300 if not self.capable('pushkey'):
301 yield {}, None
301 yield {}, None
302 f = future()
302 f = future()
303 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
303 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
304 yield {'namespace': encoding.fromlocal(namespace)}, f
304 yield {'namespace': encoding.fromlocal(namespace)}, f
305 d = f.value
305 d = f.value
306 r = {}
306 r = {}
307 for l in d.splitlines():
307 for l in d.splitlines():
308 k, v = l.split('\t')
308 k, v = l.split('\t')
309 r[encoding.tolocal(k)] = encoding.tolocal(v)
309 r[encoding.tolocal(k)] = encoding.tolocal(v)
310 yield r
310 yield r
311
311
312 def stream_out(self):
312 def stream_out(self):
313 return self._callstream('stream_out')
313 return self._callstream('stream_out')
314
314
315 def changegroup(self, nodes, kind):
315 def changegroup(self, nodes, kind):
316 n = encodelist(nodes)
316 n = encodelist(nodes)
317 f = self._callcompressable("changegroup", roots=n)
317 f = self._callcompressable("changegroup", roots=n)
318 return changegroupmod.unbundle10(f, 'UN')
318 return changegroupmod.unbundle10(f, 'UN')
319
319
320 def changegroupsubset(self, bases, heads, kind):
320 def changegroupsubset(self, bases, heads, kind):
321 self.requirecap('changegroupsubset', _('look up remote changes'))
321 self.requirecap('changegroupsubset', _('look up remote changes'))
322 bases = encodelist(bases)
322 bases = encodelist(bases)
323 heads = encodelist(heads)
323 heads = encodelist(heads)
324 f = self._callcompressable("changegroupsubset",
324 f = self._callcompressable("changegroupsubset",
325 bases=bases, heads=heads)
325 bases=bases, heads=heads)
326 return changegroupmod.unbundle10(f, 'UN')
326 return changegroupmod.unbundle10(f, 'UN')
327
327
328 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
328 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
329 **kwargs):
329 **kwargs):
330 self.requirecap('getbundle', _('look up remote changes'))
330 self.requirecap('getbundle', _('look up remote changes'))
331 opts = {}
331 opts = {}
332 if heads is not None:
332 if heads is not None:
333 opts['heads'] = encodelist(heads)
333 opts['heads'] = encodelist(heads)
334 if common is not None:
334 if common is not None:
335 opts['common'] = encodelist(common)
335 opts['common'] = encodelist(common)
336 if bundlecaps is not None:
336 if bundlecaps is not None:
337 opts['bundlecaps'] = ','.join(bundlecaps)
337 opts['bundlecaps'] = ','.join(bundlecaps)
338 opts.update(kwargs)
338 opts.update(kwargs)
339 f = self._callcompressable("getbundle", **opts)
339 f = self._callcompressable("getbundle", **opts)
340 if bundlecaps is not None and 'HG2X' in bundlecaps:
340 if bundlecaps is not None and 'HG2X' in bundlecaps:
341 return bundle2.unbundle20(self.ui, f)
341 return bundle2.unbundle20(self.ui, f)
342 else:
342 else:
343 return changegroupmod.unbundle10(f, 'UN')
343 return changegroupmod.unbundle10(f, 'UN')
344
344
345 def unbundle(self, cg, heads, source):
345 def unbundle(self, cg, heads, source):
346 '''Send cg (a readable file-like object representing the
346 '''Send cg (a readable file-like object representing the
347 changegroup to push, typically a chunkbuffer object) to the
347 changegroup to push, typically a chunkbuffer object) to the
348 remote server as a bundle.
348 remote server as a bundle.
349
349
350 When pushing a bundle10 stream, return an integer indicating the
350 When pushing a bundle10 stream, return an integer indicating the
351 result of the push (see localrepository.addchangegroup()).
351 result of the push (see localrepository.addchangegroup()).
352
352
353 When pushing a bundle20 stream, return a bundle20 stream.'''
353 When pushing a bundle20 stream, return a bundle20 stream.'''
354
354
355 if heads != ['force'] and self.capable('unbundlehash'):
355 if heads != ['force'] and self.capable('unbundlehash'):
356 heads = encodelist(['hashed',
356 heads = encodelist(['hashed',
357 util.sha1(''.join(sorted(heads))).digest()])
357 util.sha1(''.join(sorted(heads))).digest()])
358 else:
358 else:
359 heads = encodelist(heads)
359 heads = encodelist(heads)
360
360
361 if util.safehasattr(cg, 'deltaheader'):
361 if util.safehasattr(cg, 'deltaheader'):
362 # this is a bundle10, do the old style call sequence
362 # this is a bundle10, do the old style call sequence
363 ret, output = self._callpush("unbundle", cg, heads=heads)
363 ret, output = self._callpush("unbundle", cg, heads=heads)
364 if ret == "":
364 if ret == "":
365 raise error.ResponseError(
365 raise error.ResponseError(
366 _('push failed:'), output)
366 _('push failed:'), output)
367 try:
367 try:
368 ret = int(ret)
368 ret = int(ret)
369 except ValueError:
369 except ValueError:
370 raise error.ResponseError(
370 raise error.ResponseError(
371 _('push failed (unexpected response):'), ret)
371 _('push failed (unexpected response):'), ret)
372
372
373 for l in output.splitlines(True):
373 for l in output.splitlines(True):
374 self.ui.status(_('remote: '), l)
374 self.ui.status(_('remote: '), l)
375 else:
375 else:
376 # bundle2 push. Send a stream, fetch a stream.
376 # bundle2 push. Send a stream, fetch a stream.
377 stream = self._calltwowaystream('unbundle', cg, heads=heads)
377 stream = self._calltwowaystream('unbundle', cg, heads=heads)
378 ret = bundle2.unbundle20(self.ui, stream)
378 ret = bundle2.unbundle20(self.ui, stream)
379 return ret
379 return ret
380
380
381 def debugwireargs(self, one, two, three=None, four=None, five=None):
381 def debugwireargs(self, one, two, three=None, four=None, five=None):
382 # don't pass optional arguments left at their default value
382 # don't pass optional arguments left at their default value
383 opts = {}
383 opts = {}
384 if three is not None:
384 if three is not None:
385 opts['three'] = three
385 opts['three'] = three
386 if four is not None:
386 if four is not None:
387 opts['four'] = four
387 opts['four'] = four
388 return self._call('debugwireargs', one=one, two=two, **opts)
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        with _callstream.

        returns the server reply as a file like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute a <cmd> on server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _calltwowaystream(self, cmd, fp, **args):
        """execute <cmd> on server

        The command will send a stream to the server and get a stream in reply.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """clearly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()

# server side

# wire protocol commands can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts
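The `options` helper above picks the known optional arguments out of the `others` dict and warns about leftovers. A standalone copy can be exercised like this (the argument values are made up for illustration):

```python
import sys

# standalone copy of the options() helper shown above
def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]  # consume recognized arguments
    if others:
        # anything left over was not expected by this command
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts

others = {'three': '3', 'five': '5'}
opts = options('debugwireargs', ['three', 'four'], others)
# 'three' is recognized and moved into opts; 'five' stays in others
# and triggers the warning on stderr
```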

# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

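Because the `batch` parser above splits on ';', ',' and '=', argument values must be escaped before being joined into one request. The helpers below sketch that escaping; the `:c`/`:o`/`:s`/`:e` replacement table is an assumption about what `escapearg`/`unescapearg` (defined elsewhere in this module) do, not a verbatim copy:

```python
def escapearg(plain):
    # escape ':' first so the escape character itself round-trips
    return (plain.replace(':', ':c')
                 .replace(',', ':o')
                 .replace(';', ':s')
                 .replace('=', ':e'))

def unescapearg(escaped):
    # reverse, in the opposite order
    return (escaped.replace(':e', '=')
                   .replace(':s', ';')
                   .replace(':o', ',')
                   .replace(':c', ':'))

# one batched call is "<op> <k1>=<v1>,<k2>=<v2>"; calls are joined by ';'
cmds = ';'.join(['heads ', 'lookup key=%s' % escapearg('a;b=c')])

# parse the request the same way batch() above does
for pair in cmds.split(';'):
    op, args = pair.split(' ', 1)
    vals = {}
    for a in args.split(','):
        if a:
            n, v = a.split('=', 1)
            vals[n] = unescapearg(v)
# the escaped value survives the three levels of splitting intact
```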
@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)

wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes done here will be propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('experimental', 'bundle2-exp', False):
        capsblob = bundle2.encodecaps(repo.bundle2caps)
        caps.append('bundle2-exp=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

# If you are writing an extension and consider wrapping this function,
# wrap `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
    for k, v in opts.iteritems():
        if k in ('heads', 'common'):
            opts[k] = decodelist(v)
        elif k == 'bundlecaps':
            opts[k] = set(v.split(','))
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

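A client consuming the RFC822-like `hello` response described above only has to split each line on the first ': '. A minimal sketch (the sample capability tokens are illustrative, not a real server's output):

```python
def parsehello(raw):
    """parse an RFC822-like 'hello' response into {field: [tokens]}"""
    fields = {}
    for line in raw.splitlines():
        key, value = line.split(': ', 1)
        fields[key] = value.split(' ')  # space separated list of tokens
    return fields

# hypothetical response; real servers advertise many more tokens
resp = "capabilities: lookup branchmap batch unbundle=HG10GZ,HG10BZ,HG10UN\n"
fields = parsehello(resp)
# fields['capabilities'] now lists the advertised tokens
```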
@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
                   for k, v in d])
    return t

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)

def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. The client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

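The payload format described in the docstring above is easy to consume on the client side. The following is a sketch of such a parser, not Mercurial's actual client code; the sample filename and sizes are made up:

```python
import io

def parsestream(fp):
    """parse a successful stream_out payload into {filename: contents}"""
    status = fp.readline()
    assert status == b'0\n'  # '0\n' marks success (see streamer() below)
    nfiles, nbytes = map(int, fp.readline().split())
    files = {}
    for _ in range(nfiles):
        # per-file header is "<filename>\0<size>\n", then <size> raw bytes
        name, size = fp.readline().rstrip(b'\n').split(b'\0')
        files[name] = fp.read(int(size))
    return files

# one file, "data/a.i", 5 bytes of content
payload = b'0\n1 5\ndata/a.i\x005\nhello'
files = parsestream(io.BytesIO(payload))
# files == {b'data/a.i': b'hello'}
```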
    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))

@wireprotocommand('unbundle', 'heads')
def unbundle(repo, proto, heads):
    their_heads = decodelist(heads)

    try:
        proto.redirect()

        exchange.check_heads(repo, their_heads, 'preparing changes')

        # write bundle data to temporary file because it can be big
        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        r = 0
        try:
            proto.getfile(fp)
            fp.seek(0)
            gen = exchange.readbundle(repo.ui, fp, None)
            r = exchange.unbundle(repo, gen, their_heads, 'serve',
                                  proto._client())
            if util.safehasattr(r, 'addpart'):
                # The return looks streamable, we are in the bundle2 case
                # and should return a stream.
                return streamres(r.getchunks())
            return pushres(r)

        finally:
            fp.close()
            os.unlink(tempname)
    except bundle2.UnknownPartError, exc:
        bundler = bundle2.bundle20(repo.ui)
-       part = bundle2.bundlepart('B2X:ERROR:UNKNOWNPART',
-                                 [('parttype', str(exc))])
-       bundler.addpart(part)
+       bundler.newpart('B2X:ERROR:UNKNOWNPART', [('parttype', str(exc))])
        return streamres(bundler.getchunks())
    except util.Abort, inst:
        # The old code we moved used sys.stderr directly.
        # We did not change it to minimise code change.
        # This needs to be moved to something proper.
        # Feel free to do it.
        if getattr(inst, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            manargs = [('message', str(inst))]
            advargs = []
            if inst.hint is not None:
                advargs.append(('hint', inst.hint))
            bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
                                               manargs, advargs))
            return streamres(bundler.getchunks())
        else:
            sys.stderr.write("abort: %s\n" % inst)
            return pushres(0)
    except error.PushRaced, exc:
        if getattr(exc, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
-           part = bundle2.bundlepart('B2X:ERROR:PUSHRACED',
-                                     [('message', str(exc))])
-           bundler.addpart(part)
+           bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
            return streamres(bundler.getchunks())
        else:
            return pusherr(str(exc))
@@ -1,1096 +1,1083 b''

Create an extension to test bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test bundle2 implementation
  >
  > Current bundle2 implementation is far too limited to be used in any core
  > code. We still need to be able to test it while it grows up.
  > """
  >
  > try:
  >     import msvcrt
  >     msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
  > except ImportError:
  >     pass
  >
  > import sys
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > from mercurial import error
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdin"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write(' %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
  >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
- >         rpart = bundle2.bundlepart('test:pong',
- >                                    [('in-reply-to', str(part.id))])
- >         op.reply.addpart(rpart)
+ >         op.reply.newpart('test:pong', [('in-reply-to', str(part.id))])
  >
  > @bundle2.parthandler('test:debugreply')
  > def debugreply(op, part):
  >     """print data about the capacity of the bundle reply"""
  >     if op.reply is None:
  >         op.ui.write('debugreply: no reply\n')
  >     else:
  >         op.ui.write('debugreply: capabilities:\n')
  >         for cap in sorted(op.reply.capabilities):
  >             op.ui.write('debugreply: %r\n' % cap)
  >             for val in op.reply.capabilities[cap]:
  >                 op.ui.write('debugreply: %r\n' % val)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts to the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
  >           ('r', 'rev', [], 'includes those changesets in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard output"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
  >             raise util.Abort('%s' % exc)
  >
  >     if opts['reply']:
  >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
- >         bundler.addpart(bundle2.bundlepart('b2x:replycaps', data=capsstring))
+ >         bundler.newpart('b2x:replycaps', data=capsstring)
  >
  >     if opts['pushrace']:
  >         dummynode = '01234567890123456789'
- >         bundler.addpart(bundle2.bundlepart('b2x:check:heads', data=dummynode))
+ >         bundler.newpart('b2x:check:heads', data=dummynode)
  >
  >     revs = opts['rev']
  >     if 'rev' in opts:
  >         revs = scmutil.revrange(repo, opts['rev'])
  >     if revs:
  >         # very crude version of a changegroup part creation
  >         bundled = repo.revs('%ld::%ld', revs, revs)
  >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
  >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
  >         outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
  >         cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
102 > part = bundle2.bundlepart('b2x:changegroup', data=cg.getchunks())
100 > bundler.newpart('b2x:changegroup', data=cg.getchunks())
103 > bundler.addpart(part)
104 >
101 >
105 > if opts['parts']:
102 > if opts['parts']:
106 > part = bundle2.bundlepart('test:empty')
103 > bundler.newpart('test:empty')
107 > bundler.addpart(part)
108 > # add a second one to make sure we handle multiple parts
104 > # add a second one to make sure we handle multiple parts
109 > part = bundle2.bundlepart('test:empty')
105 > bundler.newpart('test:empty')
110 > bundler.addpart(part)
106 > bundler.newpart('test:song', data=ELEPHANTSSONG)
111 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
107 > bundler.newpart('test:debugreply')
112 > bundler.addpart(part)
108 > bundler.newpart('test:math',
113 > part = bundle2.bundlepart('test:debugreply')
114 > bundler.addpart(part)
115 > part = bundle2.bundlepart('test:math',
116 > [('pi', '3.14'), ('e', '2.72')],
109 > [('pi', '3.14'), ('e', '2.72')],
117 > [('cooking', 'raw')],
110 > [('cooking', 'raw')],
118 > '42')
111 > '42')
119 > bundler.addpart(part)
120 > if opts['unknown']:
112 > if opts['unknown']:
121 > part = bundle2.bundlepart('test:UNKNOWN',
113 > bundler.newpart('test:UNKNOWN', data='some random content')
122 > data='some random content')
123 > bundler.addpart(part)
124 > if opts['parts']:
114 > if opts['parts']:
125 > part = bundle2.bundlepart('test:ping')
115 > bundler.newpart('test:ping')
126 > bundler.addpart(part)
127 >
116 >
128 > if path is None:
117 > if path is None:
129 > file = sys.stdout
118 > file = sys.stdout
130 > else:
119 > else:
131 > file = open(path, 'w')
120 > file = open(path, 'w')
132 >
121 >
133 > for chunk in bundler.getchunks():
122 > for chunk in bundler.getchunks():
134 > file.write(chunk)
123 > file.write(chunk)
135 >
124 >
136 > @command('unbundle2', [], '')
125 > @command('unbundle2', [], '')
137 > def cmdunbundle2(ui, repo, replypath=None):
126 > def cmdunbundle2(ui, repo, replypath=None):
138 > """process a bundle2 stream from stdin on the current repo"""
127 > """process a bundle2 stream from stdin on the current repo"""
139 > try:
128 > try:
140 > tr = None
129 > tr = None
141 > lock = repo.lock()
130 > lock = repo.lock()
142 > tr = repo.transaction('processbundle')
131 > tr = repo.transaction('processbundle')
143 > try:
132 > try:
144 > unbundler = bundle2.unbundle20(ui, sys.stdin)
133 > unbundler = bundle2.unbundle20(ui, sys.stdin)
145 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
134 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
146 > tr.close()
135 > tr.close()
147 > except KeyError, exc:
136 > except KeyError, exc:
148 > raise util.Abort('missing support for %s' % exc)
137 > raise util.Abort('missing support for %s' % exc)
149 > except error.PushRaced, exc:
138 > except error.PushRaced, exc:
150 > raise util.Abort('push race: %s' % exc)
139 > raise util.Abort('push race: %s' % exc)
151 > finally:
140 > finally:
152 > if tr is not None:
141 > if tr is not None:
153 > tr.release()
142 > tr.release()
154 > lock.release()
143 > lock.release()
155 > remains = sys.stdin.read()
144 > remains = sys.stdin.read()
156 > ui.write('%i unread bytes\n' % len(remains))
145 > ui.write('%i unread bytes\n' % len(remains))
157 > if op.records['song']:
146 > if op.records['song']:
158 > totalverses = sum(r['verses'] for r in op.records['song'])
147 > totalverses = sum(r['verses'] for r in op.records['song'])
159 > ui.write('%i total verses sung\n' % totalverses)
148 > ui.write('%i total verses sung\n' % totalverses)
160 > for rec in op.records['changegroup']:
149 > for rec in op.records['changegroup']:
161 > ui.write('addchangegroup return: %i\n' % rec['return'])
150 > ui.write('addchangegroup return: %i\n' % rec['return'])
162 > if op.reply is not None and replypath is not None:
151 > if op.reply is not None and replypath is not None:
163 > file = open(replypath, 'w')
152 > file = open(replypath, 'w')
164 > for chunk in op.reply.getchunks():
153 > for chunk in op.reply.getchunks():
165 > file.write(chunk)
154 > file.write(chunk)
166 >
155 >
167 > @command('statbundle2', [], '')
156 > @command('statbundle2', [], '')
168 > def cmdstatbundle2(ui, repo):
157 > def cmdstatbundle2(ui, repo):
169 > """print statistic on the bundle2 container read from stdin"""
158 > """print statistic on the bundle2 container read from stdin"""
170 > unbundler = bundle2.unbundle20(ui, sys.stdin)
159 > unbundler = bundle2.unbundle20(ui, sys.stdin)
171 > try:
160 > try:
172 > params = unbundler.params
161 > params = unbundler.params
173 > except KeyError, exc:
162 > except KeyError, exc:
174 > raise util.Abort('unknown parameters: %s' % exc)
163 > raise util.Abort('unknown parameters: %s' % exc)
175 > ui.write('options count: %i\n' % len(params))
164 > ui.write('options count: %i\n' % len(params))
176 > for key in sorted(params):
165 > for key in sorted(params):
177 > ui.write('- %s\n' % key)
166 > ui.write('- %s\n' % key)
178 > value = params[key]
167 > value = params[key]
179 > if value is not None:
168 > if value is not None:
180 > ui.write(' %s\n' % value)
169 > ui.write(' %s\n' % value)
181 > count = 0
170 > count = 0
182 > for p in unbundler.iterparts():
171 > for p in unbundler.iterparts():
183 > count += 1
172 > count += 1
184 > ui.write(' :%s:\n' % p.type)
173 > ui.write(' :%s:\n' % p.type)
185 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
174 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
186 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
175 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
187 > ui.write(' payload: %i bytes\n' % len(p.read()))
176 > ui.write(' payload: %i bytes\n' % len(p.read()))
188 > ui.write('parts count: %i\n' % count)
177 > ui.write('parts count: %i\n' % count)
189 > EOF
178 > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [experimental]
  > bundle2-exp=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > [web]
  > push_ssl = false
  > allow_push = *
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
  HG2X\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0

Test old style bundles are detected and refused

  $ hg bundle --all ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameters form

Test generation simple option

  $ hg bundle2 --param 'caution'
  HG2X\x00\x07caution\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation multiple option

  $ hg bundle2 --param 'caution' --param 'meal'
  HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
  HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
      vegan
  parts count: 0

parameter with special char in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  parts count: 0

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: 'Gravity'
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
  start emission of HG2X stream
  bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  start of parts
  end of bundle

file content is ok

  $ cat ../out.hg2
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

unbundling debug

  $ hg statbundle2 --debug < ../out.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  ignoring unknown parameter 'e|! 7/'
  ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  start extraction of bundle2 parts
  part header size: 0
  end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]


Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug
  start emission of HG2X stream
  bundle parameter:
  start of parts
  bundle part: "test:empty"
  bundle part: "test:empty"
  bundle part: "test:song"
  bundle part: "test:debugreply"
  bundle part: "test:math"
  bundle part: "test:ping"
  end of bundle

  $ cat ../parts.hg2
  HG2X\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


  $ hg statbundle2 < ../parts.hg2
  options count: 0
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:song:
      mandatory: 0
      advisory: 0
      payload: 178 bytes
    :test:debugreply:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:math:
      mandatory: 2
      advisory: 1
      payload: 2 bytes
    :test:ping:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
  parts count: 6

415 start processing of HG2X stream
404 start processing of HG2X stream
416 reading bundle2 stream parameters
405 reading bundle2 stream parameters
417 options count: 0
406 options count: 0
418 start extraction of bundle2 parts
407 start extraction of bundle2 parts
419 part header size: 17
408 part header size: 17
420 part type: "test:empty"
409 part type: "test:empty"
421 part id: "0"
410 part id: "0"
422 part parameters: 0
411 part parameters: 0
423 :test:empty:
412 :test:empty:
424 mandatory: 0
413 mandatory: 0
425 advisory: 0
414 advisory: 0
426 payload chunk size: 0
415 payload chunk size: 0
427 payload: 0 bytes
416 payload: 0 bytes
428 part header size: 17
417 part header size: 17
429 part type: "test:empty"
418 part type: "test:empty"
430 part id: "1"
419 part id: "1"
431 part parameters: 0
420 part parameters: 0
432 :test:empty:
421 :test:empty:
433 mandatory: 0
422 mandatory: 0
434 advisory: 0
423 advisory: 0
435 payload chunk size: 0
424 payload chunk size: 0
436 payload: 0 bytes
425 payload: 0 bytes
437 part header size: 16
426 part header size: 16
438 part type: "test:song"
427 part type: "test:song"
439 part id: "2"
428 part id: "2"
440 part parameters: 0
429 part parameters: 0
441 :test:song:
430 :test:song:
442 mandatory: 0
431 mandatory: 0
443 advisory: 0
432 advisory: 0
444 payload chunk size: 178
433 payload chunk size: 178
445 payload chunk size: 0
434 payload chunk size: 0
446 payload: 178 bytes
435 payload: 178 bytes
447 part header size: 22
436 part header size: 22
448 part type: "test:debugreply"
437 part type: "test:debugreply"
449 part id: "3"
438 part id: "3"
450 part parameters: 0
439 part parameters: 0
451 :test:debugreply:
440 :test:debugreply:
452 mandatory: 0
441 mandatory: 0
453 advisory: 0
442 advisory: 0
454 payload chunk size: 0
443 payload chunk size: 0
455 payload: 0 bytes
444 payload: 0 bytes
456 part header size: 43
445 part header size: 43
457 part type: "test:math"
446 part type: "test:math"
458 part id: "4"
447 part id: "4"
459 part parameters: 3
448 part parameters: 3
460 :test:math:
449 :test:math:
461 mandatory: 2
450 mandatory: 2
462 advisory: 1
451 advisory: 1
463 payload chunk size: 2
452 payload chunk size: 2
464 payload chunk size: 0
453 payload chunk size: 0
465 payload: 2 bytes
454 payload: 2 bytes
466 part header size: 16
455 part header size: 16
467 part type: "test:ping"
456 part type: "test:ping"
468 part id: "5"
457 part id: "5"
469 part parameters: 0
458 part parameters: 0
470 :test:ping:
459 :test:ping:
471 mandatory: 0
460 mandatory: 0
472 advisory: 0
461 advisory: 0
473 payload chunk size: 0
462 payload chunk size: 0
474 payload: 0 bytes
463 payload: 0 bytes
475 part header size: 0
464 part header size: 0
476 end of bundle2 stream
465 end of bundle2 stream
477 parts count: 6
466 parts count: 6
478
467
Test actual unbundling of test part
=======================================

Process the bundle

  $ hg unbundle2 --debug < ../parts.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
  found a handler for part 'test:song'
  The choir starts singing:
  payload chunk size: 178
  payload chunk size: 0
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  part header size: 22
  part type: "test:debugreply"
  part id: "3"
  part parameters: 0
  found a handler for part 'test:debugreply'
  debugreply: no reply
  payload chunk size: 0
  part header size: 43
  part type: "test:math"
  part id: "4"
  part parameters: 3
  ignoring unknown advisory part 'test:math'
  payload chunk size: 2
  payload chunk size: 0
  part header size: 16
  part type: "test:ping"
  part id: "5"
  part parameters: 0
  found a handler for part 'test:ping'
  received ping request (id 5)
  payload chunk size: 0
  part header size: 0
  end of bundle2 stream
  0 unread bytes
  3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  debugreply: no reply
  0 unread bytes
  abort: missing support for 'test:unknown'
  [255]

unbundle with a reply

  $ hg bundle2 --parts --reply ../parts-reply.hg2
  $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ cat ../reply.hg2
  HG2X\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  \x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
  debugreply: 'city=!'
  debugreply: 'celeste,ville'
  debugreply: 'elephants'
  debugreply: 'babar'
  debugreply: 'celeste'
  debugreply: 'ping-pong'
  \x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to6\x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to6\x00\x00\x00=received ping request (id 6) (esc)
  replying to ping request (id 6)
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 217 bytes
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 201 bytes
    :test:pong:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :b2x:output:
      mandatory: 0
      advisory: 1
      payload: 61 bytes
  parts count: 4

Unbundle the reply to get the output:

  $ hg unbundle2 < ../reply.hg2
  remote: The choir starts singing:
  remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  remote: debugreply: capabilities:
  remote: debugreply: 'city=!'
  remote: debugreply: 'celeste,ville'
  remote: debugreply: 'elephants'
  remote: debugreply: 'babar'
  remote: debugreply: 'celeste'
  remote: debugreply: 'ping-pong'
  remote: received ping request (id 6)
  remote: replying to ping request (id 6)
  0 unread bytes

Test push race detection

  $ hg bundle2 --pushrace ../part-race.hg2

  $ hg unbundle2 < ../part-race.hg2
  0 unread bytes
  abort: push race: repository changed while pushing - please try again
  [255]

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

639 $ hg log -G
628 $ hg log -G
640 o changeset: 8:02de42196ebe
629 o changeset: 8:02de42196ebe
641 | tag: tip
630 | tag: tip
642 | parent: 6:24b6387c8c8c
631 | parent: 6:24b6387c8c8c
643 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
632 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
644 | date: Sat Apr 30 15:24:48 2011 +0200
633 | date: Sat Apr 30 15:24:48 2011 +0200
645 | summary: H
634 | summary: H
646 |
635 |
647 | o changeset: 7:eea13746799a
636 | o changeset: 7:eea13746799a
648 |/| parent: 6:24b6387c8c8c
637 |/| parent: 6:24b6387c8c8c
649 | | parent: 5:9520eea781bc
638 | | parent: 5:9520eea781bc
650 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
639 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
651 | | date: Sat Apr 30 15:24:48 2011 +0200
640 | | date: Sat Apr 30 15:24:48 2011 +0200
652 | | summary: G
641 | | summary: G
653 | |
642 | |
654 o | changeset: 6:24b6387c8c8c
643 o | changeset: 6:24b6387c8c8c
655 | | parent: 1:cd010b8cd998
644 | | parent: 1:cd010b8cd998
656 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
645 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
657 | | date: Sat Apr 30 15:24:48 2011 +0200
646 | | date: Sat Apr 30 15:24:48 2011 +0200
658 | | summary: F
647 | | summary: F
659 | |
648 | |
660 | o changeset: 5:9520eea781bc
649 | o changeset: 5:9520eea781bc
661 |/ parent: 1:cd010b8cd998
650 |/ parent: 1:cd010b8cd998
662 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
651 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
663 | date: Sat Apr 30 15:24:48 2011 +0200
652 | date: Sat Apr 30 15:24:48 2011 +0200
664 | summary: E
653 | summary: E
665 |
654 |
666 | o changeset: 4:32af7686d403
655 | o changeset: 4:32af7686d403
667 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
656 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
668 | | date: Sat Apr 30 15:24:48 2011 +0200
657 | | date: Sat Apr 30 15:24:48 2011 +0200
669 | | summary: D
658 | | summary: D
670 | |
659 | |
671 | o changeset: 3:5fddd98957c8
660 | o changeset: 3:5fddd98957c8
672 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
661 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
673 | | date: Sat Apr 30 15:24:48 2011 +0200
662 | | date: Sat Apr 30 15:24:48 2011 +0200
674 | | summary: C
663 | | summary: C
675 | |
664 | |
676 | o changeset: 2:42ccdea3bb16
665 | o changeset: 2:42ccdea3bb16
677 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
666 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
678 | date: Sat Apr 30 15:24:48 2011 +0200
667 | date: Sat Apr 30 15:24:48 2011 +0200
679 | summary: B
668 | summary: B
680 |
669 |
681 o changeset: 1:cd010b8cd998
670 o changeset: 1:cd010b8cd998
682 parent: -1:000000000000
671 parent: -1:000000000000
683 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
672 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
684 date: Sat Apr 30 15:24:48 2011 +0200
673 date: Sat Apr 30 15:24:48 2011 +0200
685 summary: A
674 summary: A
686
675
687 @ changeset: 0:3903775176ed
676 @ changeset: 0:3903775176ed
688 user: test
677 user: test
689 date: Thu Jan 01 00:00:00 1970 +0000
678 date: Thu Jan 01 00:00:00 1970 +0000
690 summary: a
679 summary: a
691
680
692
681
693 $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
682 $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
694 4 changesets found
683 4 changesets found
695 list of changesets:
684 list of changesets:
696 32af7686d403cf45b5d95f2d70cebea587ac806a
685 32af7686d403cf45b5d95f2d70cebea587ac806a
697 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
686 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
698 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
687 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
699 02de42196ebee42ef284b6780a87cdc96e8eaab6
688 02de42196ebee42ef284b6780a87cdc96e8eaab6
700 start emission of HG2X stream
689 start emission of HG2X stream
701 bundle parameter:
690 bundle parameter:
702 start of parts
691 start of parts
703 bundle part: "b2x:changegroup"
692 bundle part: "b2x:changegroup"
704 bundling: 1/4 changesets (25.00%)
693 bundling: 1/4 changesets (25.00%)
705 bundling: 2/4 changesets (50.00%)
694 bundling: 2/4 changesets (50.00%)
706 bundling: 3/4 changesets (75.00%)
695 bundling: 3/4 changesets (75.00%)
707 bundling: 4/4 changesets (100.00%)
696 bundling: 4/4 changesets (100.00%)
708 bundling: 1/4 manifests (25.00%)
697 bundling: 1/4 manifests (25.00%)
709 bundling: 2/4 manifests (50.00%)
698 bundling: 2/4 manifests (50.00%)
710 bundling: 3/4 manifests (75.00%)
699 bundling: 3/4 manifests (75.00%)
711 bundling: 4/4 manifests (100.00%)
700 bundling: 4/4 manifests (100.00%)
712 bundling: D 1/3 files (33.33%)
701 bundling: D 1/3 files (33.33%)
713 bundling: E 2/3 files (66.67%)
702 bundling: E 2/3 files (66.67%)
714 bundling: H 3/3 files (100.00%)
703 bundling: H 3/3 files (100.00%)
715 end of bundle
704 end of bundle
716
705
717 $ cat ../rev.hg2
706 $ cat ../rev.hg2
718 HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
707 HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
719 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
708 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
720 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
709 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
721 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
710 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
722 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
711 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
723 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
712 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
724 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
713 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
725 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
714 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
726 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
715 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
727 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
716 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
728 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
717 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
729 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
718 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
730 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
719 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
731 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
720 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
732 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
721 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
733 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
722 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
734 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
723 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
735 l\r (no-eol) (esc)
724 l\r (no-eol) (esc)
736 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
725 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
737 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
726 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
738 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
727 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
739 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
728 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
740
729
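The dump above can be decoded by hand: ``HG2X`` is the magic, the next two bytes (``\x00\x00``) say the stream carries no parameters (matching the empty ``bundle parameter:`` debug line), ``\x00\x16`` (22) is the size of the first part header, and ``\x0f`` (15) is the length of the type name ``b2x:changegroup``. A rough sketch of that framing — the field widths are inferred from these bytes and are an assumption about this draft HG2X format; ``mercurial.bundle2`` is the authoritative parser:

```python
import io
import struct

def peek_first_part(stream):
    """Read the magic, stream-level parameters, and the first part's
    type name from an HG2X bundle2 stream (sketch; no error handling)."""
    magic = stream.read(4)
    assert magic == b'HG2X', 'not an HG2X bundle2 stream'
    # 16-bit big-endian size of the stream-level parameter blob
    (paramssize,) = struct.unpack('>H', stream.read(2))
    params = stream.read(paramssize)
    # each part opens with a 16-bit big-endian header size...
    (headersize,) = struct.unpack('>H', stream.read(2))
    header = stream.read(headersize)
    # ...and the header starts with a length-prefixed part type name
    typelen = header[0]
    parttype = header[1:1 + typelen]
    return params, parttype

# first bytes of ../rev.hg2 as shown in the dump (header padded out)
sample = b'HG2X\x00\x00\x00\x16\x0fb2x:changegroup' + b'\x00' * 6
```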
  $ hg unbundle2 < ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

with reply

  $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
  $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
  0 unread bytes
  addchangegroup return: 1

  $ cat ../rev-reply.hg2
  HG2X\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1f (esc)
  b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

Real world exchange
=====================


clone --pull

  $ cd ..
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @ changeset: 1:9520eea781bc
  | tag: tip
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: E
  |
  o changeset: 0:cd010b8cd998
    user: Nicolas Dumazet <nicdumz.commits@gmail.com>
    date: Sat Apr 30 15:24:48 2011 +0200
    summary: A


pull

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

pull empty

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found

push

  $ hg -R main push other --rev eea13746799a
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files (-1 heads)

pull over ssh

  $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback
  pulling from ssh://user@dummy/main
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

pull over http

  $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
  $ cat main.pid >> $DAEMON_PIDS

  $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16
  pulling from http://localhost:$HGPORT/
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ cat main-error.log

push over ssh

  $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

push over http

  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  $ cat other-error.log

Check final content.

  $ hg -R other log -G
  o changeset: 7:32af7686d403
  | tag: tip
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: D
  |
  o changeset: 6:5fddd98957c8
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: C
  |
  o changeset: 5:42ccdea3bb16
  | parent: 0:cd010b8cd998
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: B
  |
  | o changeset: 4:02de42196ebe
  | | parent: 2:24b6387c8c8c
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: H
  | |
  | | o changeset: 3:eea13746799a
  | |/| parent: 2:24b6387c8c8c
  | | | parent: 1:9520eea781bc
  | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | | date: Sat Apr 30 15:24:48 2011 +0200
  | | | summary: G
  | | |
  | o | changeset: 2:24b6387c8c8c
  |/ / parent: 0:cd010b8cd998
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: F
  | |
  | @ changeset: 1:9520eea781bc
  |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: E
  |
  o changeset: 0:cd010b8cd998
    user: Nicolas Dumazet <nicdumz.commits@gmail.com>
    date: Sat Apr 30 15:24:48 2011 +0200
    summary: A


Error Handling
==============

Check that errors are properly returned to the client during push.

Setting up

  $ cat > failpush.py << EOF
  > """A small extension that makes push fail when using bundle2
  >
  > used to test error handling in bundle2
  > """
  >
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import exchange
  > from mercurial import extensions
  >
  > def _pushbundle2failpart(orig, pushop, bundler):
  >     extradata = orig(pushop, bundler)
  >     reason = pushop.ui.config('failpush', 'reason', None)
  >     part = None
  >     if reason == 'abort':
  >         bundler.newpart('test:abort')
  >     if reason == 'unknown':
  >         bundler.newpart('TEST:UNKNOWN')
  >     if reason == 'race':
  >         # 20 Bytes of crap
  >         bundler.newpart('b2x:check:heads', data='01234567890123456789')
  >     return extradata
  >
  > @bundle2.parthandler("test:abort")
  > def handleabort(op, part):
  >     raise util.Abort('Abandon ship!', hint="don't panic")
  >
  > def uisetup(ui):
  >     extensions.wrapfunction(exchange, '_pushbundle2extraparts', _pushbundle2failpart)
  >
  > EOF

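The extension wires itself in with ``extensions.wrapfunction``, which swaps a module attribute for a wrapper that receives the original callable as its first argument. A standalone sketch of that monkeypatching pattern (plain Python stand-ins, not Mercurial's modules):

```python
import types

def wrapfunction(container, funcname, wrapper):
    """Replace container.funcname so calls become wrapper(orig, ...).

    Mirrors the shape of mercurial.extensions.wrapfunction, for
    illustration only.
    """
    origfn = getattr(container, funcname)
    def wrap(*args, **kwargs):
        return wrapper(origfn, *args, **kwargs)
    setattr(container, funcname, wrap)
    return origfn

# stand-in for the mercurial.exchange module
exchange = types.SimpleNamespace(
    _pushbundle2extraparts=lambda pushop, bundler: ['changegroup'])

def failpart(orig, pushop, bundler):
    extradata = orig(pushop, bundler)  # run the wrapped original first
    extradata.append('fail-part')      # then inject the failing part
    return extradata

wrapfunction(exchange, '_pushbundle2extraparts', failpart)
```

Because the wrapper calls ``orig`` itself, several extensions can stack their wrappers on the same function, each deciding whether and when to delegate.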
  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]


Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for 'test:unknown'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for "'test:unknown'"
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for "'test:unknown'"
  [255]

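The pushes above fail because of a bundle2 naming rule: a part type containing any uppercase letter is mandatory, so a receiver with no handler for it must abort, while an unknown all-lowercase part may be silently ignored. That is why the extension injects ``TEST:UNKNOWN`` rather than ``test:unknown``. A sketch of that dispatch decision (hypothetical handler table, not Mercurial's actual dispatcher):

```python
# toy handler registry keyed by lowercased part type
handlers = {'b2x:changegroup': lambda part: 'applied'}

def dispatch(parttype, part=None):
    """Apply a part, skip it, or abort, based on the name's case."""
    handler = handlers.get(parttype.lower())
    if handler is None:
        if parttype != parttype.lower():
            # uppercase anywhere in the name marks the part mandatory
            raise ValueError("missing support for '%s'" % parttype.lower())
        return 'ignored'  # unknown advisory part: skip silently
    return handler(part)
```

Note the abort message reports the lowercased name, matching the ``missing support for 'test:unknown'`` output above.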
Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

Doing the actual push: hook abort

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason =
  > [hooks]
  > b2x-pretransactionclose.failpush = false
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  transaction abort!
  rollback completed
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: b2x-pretransactionclose.failpush hook exited with status 1
  remote: transaction abort!
  remote: rollback completed
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
1077 $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
1091 pushing to http://localhost:$HGPORT2/
1078 pushing to http://localhost:$HGPORT2/
1092 searching for changes
1079 searching for changes
1093 abort: b2x-pretransactionclose.failpush hook exited with status 1
1080 abort: b2x-pretransactionclose.failpush hook exited with status 1
1094 [255]
1081 [255]
1095
1082
1096
1083