bundle2: introduce `replycaps` part for on-demand reply...
Pierre-Yves David -
r21130:1ff06386 default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet transmitting a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: (16-bit integer)

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any applicative level option MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: (16-bit integer)

  The total number of bytes used by the part headers. When the header is
  empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route the part to an application level handler
    that can interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name

    :partid: A 32-bit integer (unique in the bundle) that can be used to
             refer to this part.

    :parameters:

        A part's parameters may have arbitrary content; the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N pairs of bytes, where N is the total number of parameters. Each
            pair contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size pairs stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32-bit integer, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for certain part types.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""
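The stream-level parameter framing described above can be sketched as follows. This is a minimal illustration in Python 3 syntax (the module itself targets Python 2), and `encode_stream_params` is a hypothetical helper written for this sketch, not part of the module:

```python
import struct
from urllib.parse import quote

def encode_stream_params(params):
    """Encode [(name, value-or-None), ...] as <16-bit size><blob>."""
    blocks = []
    for name, value in params:
        block = quote(name)
        if value is not None:
            block = '%s=%s' % (block, quote(value))
        blocks.append(block)
    blob = ' '.join(blocks).encode('ascii')
    # '>H' matches _fstreamparamsize: big-endian unsigned 16-bit size
    return struct.pack('>H', len(blob)) + blob

encoded = encode_stream_params([('caching', 'yes'), ('evil', None)])
size = struct.unpack('>H', encoded[:2])[0]   # bytes used by the parameters
```

The two-byte size prefix covers only the blob, so a receiver reads the size first and then exactly that many bytes of space separated, urlquoted pairs.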

import util
import struct
import urllib
import string

import changegroup
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_magicstring = 'HG20'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

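For instance, with two parameters the helper above builds the format `'>BBBB'`, one `(key-size, value-size)` byte pair per parameter. A quick sketch:

```python
import struct

# what _makefpartparamsizes(2) returns: '>' plus one 'BB' pair per parameter
fmt = '>' + ('BB' * 2)
packed = struct.pack(fmt, 3, 5, 4, 0)   # two params, sizes (3, 5) and (4, 0)
sizes = struct.unpack(fmt, packed)      # round-trips to the same size pairs
```

Each parameter contributes exactly two unsigned bytes to the header, so key and value sizes are each limited to 255 bytes.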
parthandlermapping = {}

def parthandler(parttype):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype')
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

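A usage sketch of the record container, reimplemented minimally here so the snippet is self-contained (the real `unbundlerecords` additionally tracks replies via `getreplies`):

```python
class _records(object):
    """stripped-down stand-in for unbundlerecords"""
    def __init__(self):
        self._categories = {}
        self._sequences = []
    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))
    def __iter__(self):
        return iter(self._sequences)

recs = _records()
recs.add('changegroup', 'result-1')
recs.add('pushkey', 'result-2')
recs.add('changegroup', 'result-3')
bycat = recs['changegroup']                 # filtered by category
inorder = [cat for cat, entry in recs]      # iteration stays chronological
```

Indexing returns only one category while iteration preserves the global insertion order, which is what "all iterations happen in chronological order" promises.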
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object currently has very little content; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=_notransaction):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - only create reply bundle if requested.
    op.reply = bundle20(op.ui)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            parttype = part.type
            # part keys are matched lower case
            key = parttype.lower()
            try:
                handler = parthandlermapping[key]
                op.ui.debug('found a handler for part %r\n' % parttype)
            except KeyError:
                if key != parttype: # mandatory parts
                    # todo:
                    # - use a more precise exception
                    raise
                op.ui.debug('ignoring unknown advisory part %r\n' % key)
                # consuming the part
                part.read()
                continue

            # handler is called outside the above try block so that we don't
            # risk catching KeyErrors from anything other than the
            # parthandlermapping lookup (any KeyError raised by handler()
            # itself represents a defect of a different variety).
            handler(op, part)
            part.read()
    except Exception:
        if part is not None:
            # consume the bundle content
            part.read()
        for part in iterparts:
            # consume the bundle content
            part.read()
        raise
    return op

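The mandatory/advisory test in the `KeyError` branch above boils down to one comparison: lowercasing a part type changes the string only when it contained an uppercase character. A quick sketch (part type names here are illustrative):

```python
def ismandatory(parttype):
    # matching is done on the lowercased key; any uppercase char in the
    # original type marks the part as mandatory
    return parttype.lower() != parttype

advisory = ismandatory('changegroup')   # all lower case -> advisory
mandatory = ismandatory('REPLYCAPS')    # contains uppercase -> mandatory
```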
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `addpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    def __init__(self, ui):
        self.ui = ui
        self._params = []
        self._parts = []

    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

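Putting the framing together, the smallest well-formed stream `getchunks` can emit is the magic string, a zero-size parameter block, and the two-byte end-of-stream marker (bytes literals here for illustration; the Python 2 module deals in plain strings):

```python
import struct

magic = b'HG20'
paramblock = struct.pack('>H', 0)   # no stream level parameters
endmarker = b'\x00\x00'             # empty part header == end of stream
stream = magic + paramblock + endmarker
```

Eight bytes total: a useful sanity check when inspecting a captured stream by hand.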
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
        magic, version = header[0:2], header[2:4]
        if magic != 'HG':
            raise util.Abort(_('not a Mercurial bundle'))
        if version != '20':
            raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or cause a failure.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be added here later to process the options against
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise KeyError(name)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """read a part header size and return the header bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self.data = data
        self.mandatoryparams = mandatoryparams
        self.advisoryparams = advisoryparams

    def getchunks(self):
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)

531 def _payloadchunks(self):
528 def _payloadchunks(self):
532 """yield chunks of a the part payload
529 """yield chunks of a the part payload
533
530
534 Exists to handle the different methods to provide data to a part."""
531 Exists to handle the different methods to provide data to a part."""
535 # we only support fixed size data now.
532 # we only support fixed size data now.
536 # This will be improved in the future.
533 # This will be improved in the future.
537 if util.safehasattr(self.data, 'next'):
534 if util.safehasattr(self.data, 'next'):
538 buff = util.chunkbuffer(self.data)
535 buff = util.chunkbuffer(self.data)
539 chunk = buff.read(preferedchunksize)
536 chunk = buff.read(preferedchunksize)
540 while chunk:
537 while chunk:
541 yield chunk
538 yield chunk
542 chunk = buff.read(preferedchunksize)
539 chunk = buff.read(preferedchunksize)
543 elif len(self.data):
540 elif len(self.data):
544 yield self.data
541 yield self.data
545
542
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self.mandatoryparams = manparams
        self.advisoryparams = advparams
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data


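The payload framing produced by `getchunks` and consumed by the nested `payloadchunks` generator above — size-prefixed chunks terminated by a zero-size chunk — can be sketched in isolation. This is a hedged standalone sketch: `_fpayloadsize` is assumed here to be a 32-bit big-endian length, matching the framing the debug output implies, and the helper names are hypothetical.

```python
import io
import struct

# Assumed stand-in for bundle2's module-level _fpayloadsize constant:
# a 32-bit big-endian chunk length.
_fpayloadsize = '>I'

def framepayload(chunks):
    """Size-prefix each chunk, then emit the zero-size end-of-payload
    marker, as bundlepart.getchunks does for the payload section."""
    for chunk in chunks:
        yield struct.pack(_fpayloadsize, len(chunk))
        yield chunk
    yield struct.pack(_fpayloadsize, 0)

def readpayload(fp):
    """Inverse direction: read size-prefixed chunks until the zero-size
    terminator, mirroring the payloadchunks generator."""
    parts = []
    (size,) = struct.unpack(_fpayloadsize, fp.read(4))
    while size:
        parts.append(fp.read(size))
        (size,) = struct.unpack(_fpayloadsize, fp.read(4))
    return b''.join(parts)
```

The zero-size terminator is what lets `unbundlepart.read` detect end of payload without knowing the total length up front.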
@parthandler('changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = bundlepart('reply:changegroup', (),
                          [('in-reply-to', str(inpart.id)),
                           ('return', '%i' % ret)])
        op.reply.addpart(part)
    assert not inpart.read()

@parthandler('reply:changegroup')
def handlereplychangegroup(op, inpart):
    p = dict(inpart.advisoryparams)
    ret = int(p['return'])
    op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise exchange.PushRaced()

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    Will convey bundle capabilities at some point too."""
    if op.reply is None:
        op.reply = bundle20(op.ui)
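The serialization in `bundlepart.getchunks` and the parsing in `unbundlepart._readheader` are mirror images of each other. A standalone sketch of both directions follows; the format strings are assumptions standing in for the module-level constants (`_fparttypesize`, `_fpartid`, `_fpartparamcount`, `_makefpartparamsizes`) that this hunk does not show, so treat the exact field widths as illustrative rather than authoritative.

```python
import struct

# Assumed stand-ins for bundle2's module-level format constants.
_fparttypesize = '>B'     # one byte: length of the part type string
_fpartid = '>I'           # 32-bit big-endian part id
_fpartparamcount = '>BB'  # mandatory / advisory parameter counts

def packpartheader(parttype, partid, manpar=(), advpar=()):
    """Serialize a part header the way bundlepart.getchunks does."""
    header = [struct.pack(_fparttypesize, len(parttype)), parttype,
              struct.pack(_fpartid, partid),
              struct.pack(_fpartparamcount, len(manpar), len(advpar))]
    sizes = []
    for key, value in list(manpar) + list(advpar):
        sizes.extend([len(key), len(value)])
    # one (key size, value size) byte pair per parameter
    header.append(struct.pack('>' + 'BB' * len(sizes[::2]), *sizes))
    for key, value in list(manpar) + list(advpar):
        header.extend([key, value])
    return b''.join(header)

def unpackpartheader(data):
    """Parse a header back, mirroring unbundlepart._readheader."""
    off = 0
    (typesize,) = struct.unpack_from(_fparttypesize, data, off)
    off += struct.calcsize(_fparttypesize)
    parttype = data[off:off + typesize]
    off += typesize
    (partid,) = struct.unpack_from(_fpartid, data, off)
    off += struct.calcsize(_fpartid)
    mancount, advcount = struct.unpack_from(_fpartparamcount, data, off)
    off += struct.calcsize(_fpartparamcount)
    n = mancount + advcount
    sizes = struct.unpack_from('>' + 'BB' * n, data, off)
    off += 2 * n
    params = []
    for ksize, vsize in zip(sizes[::2], sizes[1::2]):
        params.append((data[off:off + ksize],
                       data[off + ksize:off + ksize + vsize]))
        off += ksize + vsize
    return parttype, partid, params[:mancount], params[mancount:]
```

All sizes come first and the key/value bytes are appended afterwards, which is why `_readheader` can compute every parameter boundary before touching the data.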
@@ -1,705 +1,706 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno
import util, scmutil, changegroup, base85
import discovery, phases, obsolete, bookmarks, bundle2

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.unbundle10(fh, alg)
    elif version == '20':
        return bundle2.unbundle20(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))


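`readbundle` dispatches on the first four bytes of the stream: a leading NUL byte marks a headerless raw changegroup (fixed up as `HG10` with the `UN` compression algorithm), otherwise the `HG` magic plus a two-byte version selects `unbundle10` or `unbundle20`. A hypothetical standalone classifier under those same rules:

```python
def bundleversion(header):
    """Classify a 4-byte bundle header. Hypothetical helper mirroring
    readbundle's dispatch logic; not part of exchange.py."""
    if not header.startswith(b'HG') and header.startswith(b'\0'):
        # raw changegroup stream: readbundle fixes it up as HG10 + 'UN'
        return 'headerless'
    magic, version = header[0:2], header[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version not in (b'10', b'20'):
        raise ValueError('unknown bundle version %r' % version)
    return version.decode('ascii')
```

Note that for `HG10` the two-byte compression algorithm identifier follows the header and is read separately, which is why `readbundle` only consumes four bytes up front.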
class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # Integer version of the push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.ret = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # set of all heads common after changeset bundle push
        self.commonheads = None

def push(repo, remote, force=False, revs=None, newbranch=False):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    locallock = None
    try:
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if _pushcheckoutgoing(pushop):
                pushop.repo.prepushoutgoinghooks(pushop.repo,
                                                 pushop.remote,
                                                 pushop.outgoing)
                if pushop.remote.capable('bundle2'):
                    _pushbundle2(pushop)
                else:
                    _pushchangeset(pushop)
            _pushcomputecommonheads(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
        finally:
            if lock is not None:
                lock.release()
    finally:
        if locallock is not None:
            locallock.release()

    _pushbookmark(pushop)
    return pushop.ret

def _pushdiscovery(pushop):
    # discovery
    unfi = pushop.repo.unfiltered()
    fci = discovery.findcommonincoming
    commoninc = fci(unfi, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for the 80-char limit reason
            mso = _("push includes obsolete changeset: %s!")
            mst = "push includes %s changeset: %s!"
            # plain versions for the i18n tool to detect them
            _("push includes unstable changeset: %s!")
            _("push includes bumped changeset: %s!")
            _("push includes divergent changeset: %s!")
            # If we are to push and there is at least one
            # obsolete or unstable changeset in missing, at
            # least one of the missingheads will be obsolete or
            # unstable. So checking heads only is ok
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(_(mst)
                                     % (ctx.troubles()[0],
                                        ctx))
        newbm = pushop.ui.configlist('bookmarks', 'pushing')
        discovery.checkheads(unfi, pushop.remote, outgoing,
                             pushop.remoteheads,
                             pushop.newbranch,
                             bool(pushop.incoming),
                             newbm)
    return True

def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    # Send known heads to the server for race detection.
    bundler = bundle2.bundle20(pushop.ui)
    bundler.addpart(bundle2.bundlepart('replycaps'))
    if not pushop.force:
        part = bundle2.bundlepart('CHECK:HEADS', data=iter(pushop.remoteheads))
        bundler.addpart(part)
    # add the changegroup bundle
    cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
    cgpart = bundle2.bundlepart('CHANGEGROUP', data=cg.getchunks())
    bundler.addpart(cgpart)
    stream = util.chunkbuffer(bundler.getchunks())
    reply = pushop.remote.unbundle(stream, ['force'], 'push')
    try:
        op = bundle2.processbundle(pushop.repo, reply)
    except KeyError, exc:
        raise util.Abort('missing support for %s' % exc)
    cgreplies = op.records.getreplies(cgpart.id)
    assert len(cgreplies['changegroup']) == 1
    pushop.ret = cgreplies['changegroup'][0]['return']

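Nothing in this hunk shows how `op.records` groups reply parts, but the `replycaps` round trip implies the shape: the server's `reply:changegroup` handler files its record under the `in-reply-to` part id, and `_pushbundle2` looks it up with `getreplies(cgpart.id)`. A minimal sketch of that assumed store (hypothetical, not Mercurial's actual class):

```python
class replyrecords(object):
    """Hypothetical sketch of the record store _pushbundle2 relies on:
    entries grouped per originating part id, then per category."""

    def __init__(self):
        self._replies = {}

    def add(self, category, entry, inreplyto):
        # file the record under the id of the part being replied to
        byid = self._replies.setdefault(inreplyto, {})
        byid.setdefault(category, []).append(entry)

    def getreplies(self, partid):
        # all records attached to replies of the part with this id
        return self._replies.get(partid, {})
```

Under this shape, the client recovers the server-side `addchangegroup` return value from the reply bundle exactly as the last three lines of `_pushbundle2` do.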
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.bundle10(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
                                        bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.ret = pushop.remote.unbundle(cg, remoteheads,
                                            'push')
    else:
        # we return an integer indicating remote head count
        # change
        pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())

def _pushcomputecommonheads(pushop):
    unfi = pushop.repo.unfiltered()
    if pushop.ret:
        # push succeeded, synchronize target of the push
        cheads = pushop.outgoing.missingheads
    elif pushop.revs is None:
        # all-out push failed, synchronize all common
        cheads = pushop.outgoing.commonheads
    else:
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(pushop.outgoing.common)
        nm = pushop.repo.changelog.nodemap
        cheads = [node for node in pushop.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          pushop.outgoing.commonheads,
                          pushop.outgoing.missing)
        cheads.extend(c.node() for c in revset)
    pushop.commonheads = cheads

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    unfi = pushop.repo.unfiltered()
    cheads = pushop.commonheads
    if pushop.ret:
        # push succeeded, synchronize the target of the push
        cheads = pushop.outgoing.missingheads
    elif pushop.revs is None:
        # All out push failed. synchronize on all common heads.
        cheads = pushop.outgoing.commonheads
    else:
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads) )
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(pushop.outgoing.common)
        nm = pushop.repo.changelog.nodemap
        cheads = [node for node in pushop.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          pushop.outgoing.commonheads,
                          pushop.outgoing.missing)
        cheads.extend(c.node() for c in revset)
    pushop.commonheads = cheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.ret is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changeset was pushed
        # - and the remote is publishing
        # then we may be in the issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets that are possibly draft locally
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        # Get the list of all revs that are draft on remote but public here.
        # XXX Beware that the revset breaks if droots is not strictly roots;
        # XXX we may want to ensure it is, but that is costly.
        outdated = unfi.set('heads((%ln::%ln) and public())',
                            droots, cheads)
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.locallocked:
        phases.advanceboundary(pushop.repo, phase, nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

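The `phase < pushop.repo[n].phase()` test in `_localphasemove` relies on Mercurial's numeric phase ordering (public=0, draft=1, secret=2): a node is only affected when its current phase is strictly higher than the target. A minimal standalone sketch of that filter, in plain Python rather than Mercurial's API (`would_move` and the toy phase table are illustrative names, not upstream code):

```python
# Phase numbering as in mercurial.phases: a lower number is "more public".
PUBLIC, DRAFT, SECRET = 0, 1, 2

def would_move(target, node_phases):
    # Mirror of the actualmoves filter: keep only the nodes whose
    # current phase is strictly higher than the target phase.
    return sorted(n for n, p in node_phases.items() if target < p)

current = {'a': DRAFT, 'b': PUBLIC, 'c': SECRET}
# Advancing the boundary to public would affect draft and secret nodes.
assert would_move(PUBLIC, current) == ['a', 'c']
# Advancing to draft would affect only the secret node.
assert would_move(DRAFT, current) == ['c']
```

This is why advancing a boundary is idempotent: nodes already at or below the target phase are filtered out before any work is done.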
def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    if (obsolete._enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    revnums = map(repo.changelog.rev, pushop.revs or [])
    ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
    (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
     ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
                           srchex=hex)

    for b, scid, dcid in advsrc:
        if ancestors and repo[scid].rev() not in ancestors:
            continue
        if remote.pushkey('bookmarks', b, dcid, scid):
            ui.status(_("updating bookmark %s\n") % b)
        else:
            ui.warn(_('updating bookmark %s failed!\n') % b)

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None is "all")
        self.heads = heads
        # do we force pull?
        self.force = force
        # the name of the pull transaction
        self._trname = 'pull\n' + util.hidepassword(remote.url())
        # hold the transaction once created
        self._tr = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps remaining todo (related to future bundle2 usage)
        self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    def gettransaction(self):
        """get appropriate pull transaction, creating it if needed"""
        if self._tr is None:
            self._tr = self.repo.transaction(self._trname)
        return self._tr

    def closetransaction(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def releasetransaction(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

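The `pulledsubset` property above unions the pre-pull common set with any remote head not already known to be common, unless explicit heads were requested. A standalone sketch of that logic (plain Python with illustrative node names, not Mercurial's API):

```python
def pulledsubset(heads, common, rheads):
    # Mirrors pulloperation.pulledsubset: with explicit heads we sync on
    # exactly those; otherwise we sync on the common set plus any remote
    # head that is not already known to be common, preserving order.
    if heads is not None:
        return heads
    seen = set(common)
    ret = list(common)
    for n in rheads:
        if n not in seen:
            ret.append(n)
    return ret

# Explicit heads win outright.
assert pulledsubset(['h1'], ['c1'], ['r1']) == ['h1']
# Full pull: common heads plus the genuinely new remote heads.
assert pulledsubset(None, ['c1', 'c2'], ['c2', 'r1']) == ['c1', 'c2', 'r1']
```

The membership set avoids quadratic scanning when the common set is large, which is why the real property builds `c = set(self.common)` before looping.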
def pull(repo, remote, heads=None, force=False):
    pullop = pulloperation(repo, remote, heads, force)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    lock = pullop.repo.lock()
    try:
        _pulldiscovery(pullop)
        if pullop.remote.capable('bundle2'):
            _pullbundle2(pullop)
        if 'changegroup' in pullop.todosteps:
            _pullchangeset(pullop)
        if 'phases' in pullop.todosteps:
            _pullphase(pullop)
        if 'obsmarkers' in pullop.todosteps:
            _pullobsolete(pullop)
        pullop.closetransaction()
    finally:
        pullop.releasetransaction()
        lock.release()

    return pullop.cgresult

def _pulldiscovery(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will eventually handle
    all discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    pullop.common, pullop.fetch, pullop.rheads = tmp

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    kwargs = {'bundlecaps': set(['HG20'])}
    # pulling changegroup
    pullop.todosteps.remove('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        kwargs['common'] = pullop.common
        kwargs['heads'] = pullop.heads or pullop.rheads
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if kwargs.keys() == ['format']:
        return # nothing to pull
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except KeyError, exc:
        raise util.Abort('missing support for %s' % exc)
    assert len(op.records['changegroup']) == 1
    pullop.cgresult = op.records['changegroup'][0]['return']

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay the opening of the transaction as late as possible so we
    # don't open a transaction for nothing and don't break future useful
    # rollback calls.
    pullop.todosteps.remove('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    pullop.todosteps.remove('phases')
    remotephases = pullop.remote.listkeys('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        phases.advanceboundary(pullop.repo, phases.public, pheads)
        phases.advanceboundary(pullop.repo, phases.draft,
                               pullop.pulledsubset)
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        phases.advanceboundary(pullop.repo, phases.public,
                               pullop.pulledsubset)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    pullop.todosteps.remove('obsmarkers')
    tr = None
    if obsolete._enabled:
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

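Both obsolete-marker paths (`_pushobsolete` and `_pullobsolete`) walk the `dump*` pushkeys in reverse sorted order so that processing ends on `dump0`, as the inline comment notes. A quick illustration of why reverse lexicographic order guarantees that (the key list here is a toy example):

```python
# pushkey namespaces expose obsolete markers under keys dump0, dump1, ...
keys = ['other', 'dump0', 'dump10', 'dump2']
dumps = sorted((k for k in keys if k.startswith('dump')), reverse=True)
# Lexicographically, 'dump0' < 'dump10' < 'dump2', so in reverse order
# 'dump0' sorts last and is therefore applied last.
assert dumps == ['dump2', 'dump10', 'dump0']
```

Note the ordering is lexicographic, not numeric: `dump10` lands between `dump2` and `dump0`, which is harmless here because only the final position of `dump0` matters.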
def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed. For now, the bundle can contain only changegroup parts, but
    this will change when more part types become available for bundle2.

    This is different from changegroup.getbundle, which only returns an
    HG10 changegroup bundle. They may eventually get reunited in the future
    when we have a clearer idea of the API we want for querying different
    data.

    The implementation is at a very early stage and will get massively
    reworked when the bundle API is refined.
    """
    # build bundle here.
    cg = changegroup.getbundle(repo, source, heads=heads,
                               common=common, bundlecaps=bundlecaps)
    if bundlecaps is None or 'HG20' not in bundlecaps:
        return cg
    # very crude first implementation,
    # the bundle API will change and the generation will be done lazily.
    bundler = bundle2.bundle20(repo.ui)
    part = bundle2.bundlepart('changegroup', data=cg.getchunks())
    bundler.addpart(part)
    return util.chunkbuffer(bundler.getchunks())

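The format negotiation in `getbundle` boils down to a single capability check: without an advertised `HG20` capability the caller gets a plain HG10 changegroup. A minimal sketch of that decision (the helper name is illustrative, not Mercurial's API):

```python
def choose_bundle_format(bundlecaps):
    # Mirrors the fallback in getbundle: no bundlecaps at all, or
    # bundlecaps without 'HG20', means a plain HG10 changegroup stream.
    if bundlecaps is None or 'HG20' not in bundlecaps:
        return 'HG10'
    return 'HG20'

assert choose_bundle_format(None) == 'HG10'
assert choose_bundle_format(set(['unrelatedcap'])) == 'HG10'
assert choose_bundle_format(set(['HG20'])) == 'HG20'
```

Keeping the check this simple lets old clients interoperate transparently: they never send `HG20`, so they always receive the format they understand.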
class PushRaced(RuntimeError):
    """An exception raised during unbundling that indicates a push race"""

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = util.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise PushRaced('repository changed while %s - '
                        'please try again' % context)

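The race check in `check_heads` compares a digest over the sorted head nodes rather than the head list itself. A self-contained sketch of that construction, using `hashlib` in place of Mercurial's `util.sha1` wrapper (the toy 20-byte nodes are illustrative):

```python
import hashlib

def heads_hash(heads):
    # Same construction as check_heads: a sha1 digest over the sorted,
    # concatenated binary head nodes. Sorting makes the digest independent
    # of the order in which the heads happen to be listed.
    return hashlib.sha1(b''.join(sorted(heads))).digest()

before = [b'\x01' * 20, b'\x02' * 20]
# A client that captured the server's heads before uploading can send
# ['hashed', digest]; the server recomputes and compares.
claim = ['hashed', heads_hash(before)]

# Unchanged repository: digests match regardless of head ordering.
assert claim[1] == heads_hash(list(reversed(before)))

# A head added while the bundle was in flight changes the digest,
# which is what lets the server detect the push race.
after = before + [b'\x03' * 20]
assert claim[1] != heads_hash(after)
```

Hashing keeps the wire message small and constant-size no matter how many heads the repository has, while still detecting any change to the head set.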
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    tr = None
    lock = repo.lock()
    try:
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if util.safehasattr(cg, 'params'):
            tr = repo.transaction('unbundle')
            r = bundle2.processbundle(repo, cg, lambda: tr).reply
            tr.close()
        else:
            r = changegroup.addchangegroup(repo, cg, source, url)
    finally:
        if tr is not None:
            tr.release()
        lock.release()
    return r
@@ -1,785 +1,801 @@

Create an extension to test bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test the bundle2 implementation
  >
  > The current bundle2 implementation is far too limited to be used in any
  > core code. We still need to be able to test it while it grows up.
  > """
  >
  > import sys
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write('    %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None:
  >         rpart = bundle2.bundlepart('test:pong',
  >                                    [('in-reply-to', str(part.id))])
  >         op.reply.addpart(rpart)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts to the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('r', 'rev', [], 'includes those changesets in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard output"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
58 > raise util.Abort('%s' % exc)
59 > raise util.Abort('%s' % exc)
59 >
60 >
61 > if opts['reply']:
62 > bundler.addpart(bundle2.bundlepart('replycaps'))
63 >
60 > revs = opts['rev']
64 > revs = opts['rev']
61 > if 'rev' in opts:
65 > if 'rev' in opts:
62 > revs = scmutil.revrange(repo, opts['rev'])
66 > revs = scmutil.revrange(repo, opts['rev'])
63 > if revs:
67 > if revs:
64 > # very crude version of a changegroup part creation
68 > # very crude version of a changegroup part creation
65 > bundled = repo.revs('%ld::%ld', revs, revs)
69 > bundled = repo.revs('%ld::%ld', revs, revs)
66 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
70 > headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
67 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
71 > headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
68 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
72 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
69 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
73 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
70 > part = bundle2.bundlepart('changegroup', data=cg.getchunks())
74 > part = bundle2.bundlepart('changegroup', data=cg.getchunks())
71 > bundler.addpart(part)
75 > bundler.addpart(part)
72 >
76 >
73 > if opts['parts']:
77 > if opts['parts']:
74 > part = bundle2.bundlepart('test:empty')
78 > part = bundle2.bundlepart('test:empty')
75 > bundler.addpart(part)
79 > bundler.addpart(part)
76 > # add a second one to make sure we handle multiple parts
80 > # add a second one to make sure we handle multiple parts
77 > part = bundle2.bundlepart('test:empty')
81 > part = bundle2.bundlepart('test:empty')
78 > bundler.addpart(part)
82 > bundler.addpart(part)
79 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
83 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
80 > bundler.addpart(part)
84 > bundler.addpart(part)
81 > part = bundle2.bundlepart('test:math',
85 > part = bundle2.bundlepart('test:math',
82 > [('pi', '3.14'), ('e', '2.72')],
86 > [('pi', '3.14'), ('e', '2.72')],
83 > [('cooking', 'raw')],
87 > [('cooking', 'raw')],
84 > '42')
88 > '42')
85 > bundler.addpart(part)
89 > bundler.addpart(part)
86 > if opts['unknown']:
90 > if opts['unknown']:
87 > part = bundle2.bundlepart('test:UNKNOWN',
91 > part = bundle2.bundlepart('test:UNKNOWN',
88 > data='some random content')
92 > data='some random content')
89 > bundler.addpart(part)
93 > bundler.addpart(part)
90 > if opts['parts']:
94 > if opts['parts']:
91 > part = bundle2.bundlepart('test:ping')
95 > part = bundle2.bundlepart('test:ping')
92 > bundler.addpart(part)
96 > bundler.addpart(part)
93 >
97 >
94 > if path is None:
98 > if path is None:
95 > file = sys.stdout
99 > file = sys.stdout
96 > else:
100 > else:
97 > file = open(path, 'w')
101 > file = open(path, 'w')
98 >
102 >
99 > for chunk in bundler.getchunks():
103 > for chunk in bundler.getchunks():
100 > file.write(chunk)
104 > file.write(chunk)
101 >
105 >
102 > @command('unbundle2', [], '')
106 > @command('unbundle2', [], '')
103 > def cmdunbundle2(ui, repo, replypath=None):
107 > def cmdunbundle2(ui, repo, replypath=None):
104 > """process a bundle2 stream from stdin on the current repo"""
108 > """process a bundle2 stream from stdin on the current repo"""
105 > try:
109 > try:
106 > tr = None
110 > tr = None
107 > lock = repo.lock()
111 > lock = repo.lock()
108 > tr = repo.transaction('processbundle')
112 > tr = repo.transaction('processbundle')
109 > try:
113 > try:
110 > unbundler = bundle2.unbundle20(ui, sys.stdin)
114 > unbundler = bundle2.unbundle20(ui, sys.stdin)
111 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
115 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
112 > tr.close()
116 > tr.close()
113 > except KeyError, exc:
117 > except KeyError, exc:
114 > raise util.Abort('missing support for %s' % exc)
118 > raise util.Abort('missing support for %s' % exc)
115 > finally:
119 > finally:
116 > if tr is not None:
120 > if tr is not None:
117 > tr.release()
121 > tr.release()
118 > lock.release()
122 > lock.release()
119 > remains = sys.stdin.read()
123 > remains = sys.stdin.read()
120 > ui.write('%i unread bytes\n' % len(remains))
124 > ui.write('%i unread bytes\n' % len(remains))
121 > if op.records['song']:
125 > if op.records['song']:
122 > totalverses = sum(r['verses'] for r in op.records['song'])
126 > totalverses = sum(r['verses'] for r in op.records['song'])
123 > ui.write('%i total verses sung\n' % totalverses)
127 > ui.write('%i total verses sung\n' % totalverses)
124 > for rec in op.records['changegroup']:
128 > for rec in op.records['changegroup']:
125 > ui.write('addchangegroup return: %i\n' % rec['return'])
129 > ui.write('addchangegroup return: %i\n' % rec['return'])
126 > if op.reply is not None and replypath is not None:
130 > if op.reply is not None and replypath is not None:
127 > file = open(replypath, 'w')
131 > file = open(replypath, 'w')
128 > for chunk in op.reply.getchunks():
132 > for chunk in op.reply.getchunks():
129 > file.write(chunk)
133 > file.write(chunk)
130 >
134 >
131 > @command('statbundle2', [], '')
135 > @command('statbundle2', [], '')
132 > def cmdstatbundle2(ui, repo):
136 > def cmdstatbundle2(ui, repo):
133 > """print statistic on the bundle2 container read from stdin"""
137 > """print statistic on the bundle2 container read from stdin"""
134 > unbundler = bundle2.unbundle20(ui, sys.stdin)
138 > unbundler = bundle2.unbundle20(ui, sys.stdin)
135 > try:
139 > try:
136 > params = unbundler.params
140 > params = unbundler.params
137 > except KeyError, exc:
141 > except KeyError, exc:
138 > raise util.Abort('unknown parameters: %s' % exc)
142 > raise util.Abort('unknown parameters: %s' % exc)
139 > ui.write('options count: %i\n' % len(params))
143 > ui.write('options count: %i\n' % len(params))
140 > for key in sorted(params):
144 > for key in sorted(params):
141 > ui.write('- %s\n' % key)
145 > ui.write('- %s\n' % key)
142 > value = params[key]
146 > value = params[key]
143 > if value is not None:
147 > if value is not None:
144 > ui.write(' %s\n' % value)
148 > ui.write(' %s\n' % value)
145 > count = 0
149 > count = 0
146 > for p in unbundler.iterparts():
150 > for p in unbundler.iterparts():
147 > count += 1
151 > count += 1
148 > ui.write(' :%s:\n' % p.type)
152 > ui.write(' :%s:\n' % p.type)
149 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
153 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
150 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
154 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
151 > ui.write(' payload: %i bytes\n' % len(p.read()))
155 > ui.write(' payload: %i bytes\n' % len(p.read()))
152 > ui.write('parts count: %i\n' % count)
156 > ui.write('parts count: %i\n' % count)
153 > EOF
157 > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [server]
  > bundle2=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > [web]
  > push_ssl = false
  > allow_push = *
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
  HG20\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0

Test that old-style bundles are detected and refused

  $ hg bundle --all ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameters form

Test generation of a simple option

  $ hg bundle2 --param 'caution'
  HG20\x00\x07caution\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation with multiple options

  $ hg bundle2 --param 'caution' --param 'meal'
  HG20\x00\x0ccaution meal\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
  HG20\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
   vegan
  parts count: 0

parameter with special char in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
  HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
   babar%#==tutu
  - simple
  parts count: 0

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: 'Gravity'
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
  start emission of HG20 stream
  bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  start of parts
  end of bundle

file content is ok

  $ cat ../out.hg2
  HG20\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

unbundling debug

  $ hg statbundle2 --debug < ../out.hg2
  start processing of HG20 stream
  reading bundle2 stream parameters
  ignoring unknown parameter 'e|! 7/'
  ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
   babar%#==tutu
  - simple
  start extraction of bundle2 parts
  part header size: 0
  end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]

Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug
  start emission of HG20 stream
  bundle parameter:
  start of parts
  bundle part: "test:empty"
  bundle part: "test:empty"
  bundle part: "test:song"
  bundle part: "test:math"
  bundle part: "test:ping"
  end of bundle

  $ cat ../parts.hg2
  HG20\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x03\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


  $ hg statbundle2 < ../parts.hg2
  options count: 0
  :test:empty:
  mandatory: 0
  advisory: 0
  payload: 0 bytes
  :test:empty:
  mandatory: 0
  advisory: 0
  payload: 0 bytes
  :test:song:
  mandatory: 0
  advisory: 0
  payload: 178 bytes
  :test:math:
  mandatory: 2
  advisory: 1
  payload: 2 bytes
  :test:ping:
  mandatory: 0
  advisory: 0
  payload: 0 bytes
  parts count: 5

  $ hg statbundle2 --debug < ../parts.hg2
  start processing of HG20 stream
  reading bundle2 stream parameters
  options count: 0
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
  :test:empty:
  mandatory: 0
  advisory: 0
  payload chunk size: 0
  payload: 0 bytes
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
  :test:empty:
  mandatory: 0
  advisory: 0
  payload chunk size: 0
  payload: 0 bytes
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
  :test:song:
  mandatory: 0
  advisory: 0
  payload chunk size: 178
  payload chunk size: 0
  payload: 178 bytes
  part header size: 43
  part type: "test:math"
  part id: "3"
  part parameters: 3
  :test:math:
  mandatory: 2
  advisory: 1
  payload chunk size: 2
  payload chunk size: 0
  payload: 2 bytes
  part header size: 16
  part type: "test:ping"
  part id: "4"
  part parameters: 0
  :test:ping:
  mandatory: 0
  advisory: 0
  payload chunk size: 0
  payload: 0 bytes
  part header size: 0
  end of bundle2 stream
  parts count: 5

Test actual unbundling of test part
=======================================

Process the bundle

  $ hg unbundle2 --debug < ../parts.hg2
  start processing of HG20 stream
  reading bundle2 stream parameters
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
  found a handler for part 'test:song'
  The choir starts singing:
  payload chunk size: 178
  payload chunk size: 0
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  part header size: 43
  part type: "test:math"
  part id: "3"
  part parameters: 3
  ignoring unknown advisory part 'test:math'
  payload chunk size: 2
  payload chunk size: 0
  part header size: 16
  part type: "test:ping"
  part id: "4"
  part parameters: 0
  found a handler for part 'test:ping'
  received ping request (id 4)
  payload chunk size: 0
  part header size: 0
  end of bundle2 stream
  0 unread bytes
  3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  0 unread bytes
  abort: missing support for 'test:unknown'
  [255]

unbundle with a reply

  $ hg bundle2 --parts --reply ../parts-reply.hg2
  $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  received ping request (id 5)
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ cat ../reply.hg2
  HG20\x00\x00\x00\x1e test:pong\x00\x00\x00\x00\x01\x00\x0b\x01in-reply-to5\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
  :test:pong:
  mandatory: 1
  advisory: 0
  payload: 0 bytes
  parts count: 1

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o changeset: 8:02de42196ebe
  | tag: tip
  | parent: 6:24b6387c8c8c
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: H
  |
  | o changeset: 7:eea13746799a
  |/| parent: 6:24b6387c8c8c
  | | parent: 5:9520eea781bc
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: G
  | |
  o | changeset: 6:24b6387c8c8c
  | | parent: 1:cd010b8cd998
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: F
  | |
  | o changeset: 5:9520eea781bc
  |/ parent: 1:cd010b8cd998
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: E
  |
  | o changeset: 4:32af7686d403
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: D
  | |
  | o changeset: 3:5fddd98957c8
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: C
  | |
  | o changeset: 2:42ccdea3bb16
  |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: B
  |
  o changeset: 1:cd010b8cd998
    parent: -1:000000000000
    user: Nicolas Dumazet <nicdumz.commits@gmail.com>
    date: Sat Apr 30 15:24:48 2011 +0200
    summary: A

  @ changeset: 0:3903775176ed
    user: test
    date: Thu Jan 01 00:00:00 1970 +0000
    summary: a


  $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
  4 changesets found
  list of changesets:
  32af7686d403cf45b5d95f2d70cebea587ac806a
  9520eea781bcca16c1e15acc0ba14335a0e8e5ba
  eea13746799a9e0bfd88f29d3c2e9dc9389f524f
  02de42196ebee42ef284b6780a87cdc96e8eaab6
  start emission of HG20 stream
  bundle parameter:
  start of parts
  bundle part: "changegroup"
  bundling: 1/4 changesets (25.00%)
  bundling: 2/4 changesets (50.00%)
  bundling: 3/4 changesets (75.00%)
  bundling: 4/4 changesets (100.00%)
  bundling: 1/4 manifests (25.00%)
  bundling: 2/4 manifests (50.00%)
  bundling: 3/4 manifests (75.00%)
  bundling: 4/4 manifests (100.00%)
  bundling: D 1/3 files (33.33%)
  bundling: E 2/3 files (66.67%)
  bundling: H 3/3 files (100.00%)
  end of bundle

  $ cat ../rev.hg2
  HG20\x00\x00\x00\x12\x0bchangegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
616 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
612 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
617 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
613 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
618 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
614 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
619 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
615 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
620 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
616 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
621 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
617 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
622 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
618 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
623 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
619 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
624 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
620 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
625 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
621 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
626 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
622 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
627 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
623 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
628 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
624 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
629 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
625 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
630 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
626 l\r (no-eol) (esc)
631 l\r (no-eol) (esc)
627 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
632 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
628 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
633 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
629 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
634 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
630 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
635 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
631
636
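The dump above can be cross-checked by hand. A minimal sketch (Python 3; assuming the 16-bit size fields this revision of the format uses, as inferred from the bytes shown, not an authoritative implementation) that decodes the stream prelude and the first part header:

```python
import struct

# First bytes of ../rev.hg2 as printed above, with the test's escapes expanded.
prelude = b'HG20\x00\x00\x00\x12\x0bchangegroup'

assert prelude[:4] == b'HG20'                         # magic string
(paramssize,) = struct.unpack_from('>H', prelude, 4)  # stream-level parameters: none
(headersize,) = struct.unpack_from('>H', prelude, 6)  # first part header: 18 bytes
typelen = prelude[8]                                  # length of the part type name
parttype = prelude[9:9 + typelen].decode('ascii')
print(paramssize, headersize, parttype)               # → 0 18 changegroup
```

The 18-byte header accounts for the 1-byte name length, the 11-byte name, a 4-byte part id, and one byte each for the mandatory and advisory parameter counts (all zero here).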
  $ hg unbundle2 < ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

with reply

  $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
  $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

  $ cat ../rev-reply.hg2
  HG20\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

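The `reply:changegroup` part above can be decoded the same way. A reader sketch (Python 3; framing and field widths inferred from the dumps in this test, so treat it as illustrative rather than the reference parser) that walks parts, their parameters, and payload chunks:

```python
import struct

def parse_bundle2(data):
    """Parse an HG20 byte stream into (id, type, params, payload) tuples."""
    assert data[:4] == b'HG20', 'not a bundle2 stream'
    pos = 4
    (paramssize,) = struct.unpack_from('>H', data, pos)
    pos += 2 + paramssize                         # skip stream-level parameters
    parts = []
    while True:
        (headersize,) = struct.unpack_from('>H', data, pos)
        pos += 2
        if headersize == 0:                       # end-of-stream marker
            break
        header = data[pos:pos + headersize]
        pos += headersize
        typelen = header[0]
        parttype = header[1:1 + typelen].decode('ascii')
        off = 1 + typelen
        (partid,) = struct.unpack_from('>I', header, off)
        off += 4
        nmand, nadv = header[off], header[off + 1]
        off += 2
        sizes = [(header[off + 2 * i], header[off + 2 * i + 1])
                 for i in range(nmand + nadv)]    # (key size, value size) pairs
        off += 2 * (nmand + nadv)
        params = {}
        for ksize, vsize in sizes:
            key = header[off:off + ksize].decode('ascii')
            off += ksize
            params[key] = header[off:off + vsize].decode('ascii')
            off += vsize
        payload = b''
        while True:
            (chunksize,) = struct.unpack_from('>i', data, pos)
            pos += 4
            if chunksize == 0:                    # empty chunk closes the payload
                break
            payload += data[pos:pos + chunksize]
            pos += chunksize
        parts.append((partid, parttype, params, payload))
    return parts

# The reply bundle printed by `cat ../rev-reply.hg2` above:
reply = (b'HG20\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02'
         b'\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00')
for partid, parttype, params, payload in parse_bundle2(reply):
    print(partid, parttype, params, len(payload))
    # → 0 reply:changegroup {'in-reply-to': '1', 'return': '1'} 0
```

The `in-reply-to` parameter is how the reply part points back at the part it answers, which is what makes the on-demand reply mechanism introduced by this change work.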
Real world exchange
=====================


clone --pull

  $ cd ..
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @ changeset: 1:9520eea781bc
  | tag: tip
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: E
  |
  o changeset: 0:cd010b8cd998
  user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  date: Sat Apr 30 15:24:48 2011 +0200
  summary: A


pull

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

push

  $ hg -R main push other --rev eea13746799a
  pushing to other
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files (-1 heads)

pull over ssh

  $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback
  pulling from ssh://user@dummy/main
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

pull over http

  $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
  $ cat main.pid >> $DAEMON_PIDS

  $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16
  pulling from http://localhost:$HGPORT/
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ cat main-error.log

push over ssh

  $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

push over http

  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403
  pushing to http://localhost:$HGPORT2/
  searching for changes
  $ cat other-error.log

Check final content.

  $ hg -R other log -G
  o changeset: 7:32af7686d403
  | tag: tip
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: D
  |
  o changeset: 6:5fddd98957c8
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: C
  |
  o changeset: 5:42ccdea3bb16
  | parent: 0:cd010b8cd998
  | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: B
  |
  | o changeset: 4:02de42196ebe
  | | parent: 2:24b6387c8c8c
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: H
  | |
  | | o changeset: 3:eea13746799a
  | |/| parent: 2:24b6387c8c8c
  | | | parent: 1:9520eea781bc
  | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | | date: Sat Apr 30 15:24:48 2011 +0200
  | | | summary: G
  | | |
  | o | changeset: 2:24b6387c8c8c
  |/ / parent: 0:cd010b8cd998
  | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | date: Sat Apr 30 15:24:48 2011 +0200
  | | summary: F
  | |
  | @ changeset: 1:9520eea781bc
  |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  | date: Sat Apr 30 15:24:48 2011 +0200
  | summary: E
  |
  o changeset: 0:cd010b8cd998
  user: Nicolas Dumazet <nicdumz.commits@gmail.com>
  date: Sat Apr 30 15:24:48 2011 +0200
  summary: A
