bundle2: also save output when error happens during part processing...
Pierre-Yves David -
r24851:df0ce98c stable
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows:

:params size: int32

    The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

    A blob of `params size` containing the serialized version of all stream
    level parameters.

    The blob contains a space separated list of parameters. Parameters with a
    value are stored in the form `<name>=<value>`. Both name and value are
    urlquoted.

    Empty names are forbidden.

    A name MUST start with a letter. If this first letter is lower case, the
    parameter is advisory and can be safely ignored. However, when the first
    letter is capital, the parameter is mandatory and the bundling process
    MUST stop if it is not able to process it.

    Stream parameters use a simple textual format for two main reasons:

    - Stream level parameters should remain simple and we want to discourage
      any crazy usage.
    - Textual data allow easy human inspection of a bundle2 header in case of
      trouble.

    Any applicative level options MUST go into a bundle2 part instead.

Payload part
------------------------

Binary format is as follows:

:header size: int32

    The total number of Bytes used by the part header. When the header is
    empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to
             refer to this part.

    :parameters:

        A part's parameters may have arbitrary content; the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32; `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""
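As a minimal illustration of the stream-level layout described above, here is a sketch (not part of this module) that parses the magic string, the int32 parameter size, and the space-separated urlquoted parameter blob from raw bytes. The helper name `readheader` is made up for this example.

```python
import struct

try:
    from urllib import unquote  # Python 2, as used in this module
except ImportError:
    from urllib.parse import unquote  # Python 3

def readheader(data):
    # hypothetical helper: parse magic + stream-level params from raw bytes
    magic, data = data[:4], data[4:]
    assert magic == b'HG20'
    (psize,) = struct.unpack('>i', data[:4])  # params size: int32
    blob = data[4:4 + psize].decode('ascii')
    params = {}
    for item in (blob.split(' ') if blob else []):
        if '=' in item:
            name, value = item.split('=', 1)
            params[unquote(name)] = unquote(value)
        else:
            params[unquote(item)] = None  # parameter without a value
    return params

# 'e%20v=1' is the urlquoted form of a parameter named 'e v' with value '1'
raw = b'HG20' + struct.pack('>i', 7) + b'e%20v=1'
assert readheader(raw) == {'e v': '1'}
```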

import errno
import sys
import util
import struct
import urllib
import string
import obsolete
import pushkey
import url
import re

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator
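A self-contained sketch of the registration pattern above, using its own local mapping so it does not touch the real `parthandlermapping`. Storing handlers under the lower-cased type is what makes part-to-handler matching case insensitive:

```python
handlers = {}

def parthandler(parttype, params=()):
    # same shape as the decorator above, against a local mapping
    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching
        assert lparttype not in handlers
        handlers[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

@parthandler('test:song', ('verses',))
def songhandler(op, part):
    return 'handled'

# lookup is done on the lower-cased part type, so 'TEST:SONG' matches too
assert handlers['test:song'.lower()] is songhandler
assert handlers['TEST:SONG'.lower()] is songhandler
assert songhandler.params == frozenset(['verses'])
```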

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
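A usage sketch of the record-keeping API above. The class is re-typed here in condensed form (docstrings and the iteration protocol dropped) purely so the demo is self-contained:

```python
class unbundlerecords(object):
    # condensed copy of the class above, for a self-contained demo
    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            # also file the record under the replies for that part id
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

records = unbundlerecords()
records.add('changegroup', {'return': 1})
records.add('output', 'remote: ok', inreplyto=0)
assert records['changegroup'] == ({'return': 1},)
# the reply-tagged record is reachable both globally and per part id
assert records.getreplies(0)['output'] == ('remote: ok',)
```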

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    Unknown mandatory parts will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            _processpart(op, part)
    except Exception, exc:
        for part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly needed to
        # handle different return codes to unbundle according to the type of
        # bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
        exc._bundle2salvagedoutput = salvaged
        raise
    return op
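The error path above is the point of this change: any output already captured into the reply bundle is salvaged onto the exception before it propagates, so callers can still show server output after a failure. A minimal sketch of that pattern, using hypothetical stand-in classes (`_fakepart`, `_fakereply`) rather than the real part API:

```python
class _fakepart(object):
    # hypothetical stand-in for a bundle2 'output' part
    def __init__(self, type, data):
        self.type, self.data = type, data
    def copy(self):
        return _fakepart(self.type, self.data)

class _fakereply(object):
    # hypothetical stand-in for a reply bundle with salvageoutput()
    def __init__(self, parts):
        self._parts = parts
    def salvageoutput(self):
        # copy every 'output' part, as bundle20.salvageoutput does below
        return [p.copy() for p in self._parts if p.type.startswith('output')]

reply = _fakereply([_fakepart('output', 'remote: hook failed')])
try:
    try:
        raise RuntimeError('part processing failed')
    except Exception as exc:
        # same salvage pattern as processbundle's error path above
        exc.duringunbundle2 = True
        exc._bundle2salvagedoutput = reply.salvageoutput()
        raise
except RuntimeError as exc:
    caught = exc
assert caught.duringunbundle2
assert caught._bundle2salvagedoutput[0].data == 'remote: hook failed'
```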

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                raise error.UnsupportedPartError(parttype=part.type)
            op.ui.debug('found a handler for part %r\n' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                raise error.UnsupportedPartError(parttype=part.type,
                                                 params=unknownparams)
        except error.UnsupportedPartError, exc:
            if part.mandatory: # mandatory parts
                raise
            op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
            return # skip to part processing

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
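The two functions above are inverses of each other. A roundtrip sketch, with condensed copies written against a portable `quote`/`unquote` import so it runs standalone:

```python
try:
    from urllib import quote, unquote  # Python 2, as used in this module
except ImportError:
    from urllib.parse import quote, unquote  # Python 3

def encodecaps(caps):
    # condensed copy of the function above, using the portable quote()
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps

caps = {'HG20': [], 'changegroup': ['01', '02']}
blob = encodecaps(caps)
assert blob == 'HG20\nchangegroup=01,02'
assert decodecaps(blob) == caps
```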

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged
497
503
498
504
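The stream-parameter block built by `_paramchunk` above, and parsed back by `unbundle20.params` further down, is simply a space-separated list of URL-quoted `name=value` entries, where the value is optional. A minimal round-trip sketch of that encoding, using Python 3's `urllib.parse` in place of the Python 2 `urllib.quote`/`unquote` this module relies on:

```python
from urllib.parse import quote, unquote

def encodeparams(params):
    # params: list of (name, value) pairs; value may be None for flag-style
    # parameters, mirroring _paramchunk above
    blocks = []
    for par, value in params:
        par = quote(par)
        if value is not None:
            par = '%s=%s' % (par, quote(value))
        blocks.append(par)
    return ' '.join(blocks)

def decodeparams(blob):
    # inverse operation, mirroring the parsing loop in unbundle20.params
    params = {}
    for p in blob.split(' '):
        parts = p.split('=', 1)
        parts = [unquote(i) for i in parts]
        if len(parts) < 2:
            parts.append(None)
        params[parts[0]] = parts[1]
    return params
```

Because names and values are quoted before joining, neither spaces nor `=` characters in a value can be confused with the separators.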
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError, e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, header=None):
    """return a valid unbundler object for a given header"""
    if header is None:
        header = changegroup.readexactly(fp, 4)
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise util.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise util.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    ui.debug('start processing of %s stream\n' % header)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or cause a failure.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise error.UnsupportedPartError(params=(name,))


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

formatmap = {'20': unbundle20}

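Putting `getunbundler` and the bundler's `getchunks` together, the overall stream layout from the module docstring (magic string, size-prefixed parameter block, parts, zero-size end-of-stream marker) can be sketched as a generator of the smallest valid stream. The `'>i'` big-endian size formats here are an assumption inferred from the `_pack(_fstreamparamsize, ...)` and `_pack(_fpartheadersize, ...)` calls above:

```python
import struct

# assumed struct formats, matching the _fstreamparamsize/_fpartheadersize
# usage in this module
_fstreamparamsize = '>i'
_fpartheadersize = '>i'

def emptybundle(params=''):
    """build a minimal HG20 stream: magic, parameter block, and the
    zero-size part header that marks the end of the stream"""
    out = [b'HG20', struct.pack(_fstreamparamsize, len(params))]
    if params:
        out.append(params.encode('ascii'))
    out.append(struct.pack(_fpartheadersize, 0))
    return b''.join(out)
```

An empty bundle is therefore exactly twelve bytes: the four-byte `HG20` magic, a zero parameter size, and a zero part-header size.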
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except Exception, exc:
            # backup exception data for later
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks():
                yield chunk
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


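The payload framing produced by `getchunks` and consumed by `unbundlepart._payloadchunks` below is a sequence of chunks, each prefixed with a big-endian signed 32-bit length, terminated by a zero-length chunk; the special value -1 (`flaginterrupt`) signals an embedded interruption part instead of data. A standalone sketch of the plain (non-interrupted) framing, assuming the `'>i'` format used for `_fpayloadsize`:

```python
import struct

_fpayloadsize = '>i'  # assumed format for the payload chunk size prefix
flaginterrupt = -1    # reserved size value; not produced by framechunks

def framechunks(chunks):
    """frame a list of byte chunks: size prefix per chunk, then a
    zero-size terminator"""
    out = []
    for chunk in chunks:
        out.append(struct.pack(_fpayloadsize, len(chunk)))
        out.append(chunk)
    out.append(struct.pack(_fpayloadsize, 0))
    return b''.join(out)

def readchunks(data):
    """decode a framed payload back into its list of chunks"""
    chunks, offset = [], 0
    while True:
        (size,) = struct.unpack_from(_fpayloadsize, data, offset)
        offset += 4
        if size == 0:        # end-of-payload marker
            break
        chunks.append(data[offset:offset + size])
        offset += size
    return chunks
```

Because the size field is signed, a reader can distinguish ordinary chunks (positive), the terminator (zero), and the interruption flag (-1) with a single unpack, which is exactly how `_payloadchunks` dispatches below.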
flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug('bundle2 stream interruption, looking for a part.\n')
        headerblock = self._readpartheader()
        if headerblock is None:
            self.ui.debug('no part found during interruption.\n')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None

    @property
    def repo(self):
        raise RuntimeError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, super(unbundlepart, self).tell()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            super(unbundlepart, self).seek(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        self.ui.debug('payload chunk size: %i\n' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos,
                                             super(unbundlepart, self).tell()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        self._pos += len(data)
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise util.Abort(_('Seek failed\n'))
1008 self._pos = newpos
1014 self._pos = newpos
1009
1015
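The seek implementation above follows the usual file-object convention for whence modes 0 and 1, but note that whence=2 subtracts the offset from the end of the payload rather than adding a (normally negative) offset the way io file objects do. A standalone sketch of just that arithmetic (`newpos_for` is a hypothetical helper name, not part of Mercurial):

```python
def newpos_for(pos, end, offset, whence):
    # Mirrors unbundlepart.seek: whence 0 is absolute, 1 is relative to
    # the current position, and 2 subtracts the offset from the end of
    # the payload (end == self._chunkindex[-1][0] in the real code).
    if whence == 0:
        return offset
    elif whence == 1:
        return pos + offset
    elif whence == 2:
        return end - offset
    raise ValueError('Unknown whence value: %r' % (whence,))

assert newpos_for(0, 100, 25, 0) == 25   # absolute
assert newpos_for(25, 100, 10, 1) == 35  # relative to current position
assert newpos_for(35, 100, 30, 2) == 70  # 30 bytes before the end
```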
capabilities = {'HG20': (),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

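obsmarkersversion above is pure string processing over the capability dict, so its behaviour is easy to pin down in isolation; a minimal self-contained check (the function body is copied verbatim from above):

```python
def obsmarkersversion(caps):
    # Keep capability entries shaped like 'V<n>' and strip the leading 'V'.
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

assert obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
assert obsmarkersversion({'obsmarkers': ()}) == []
assert obsmarkersversion({}) == []
```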
@parthandler('changegroup', ('version',))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    unpacker = changegroup.packermap[unpackerversion][1]
    cg = unpacker(inpart, 'UN')
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server's knowledge about the
        bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise util.Abort(_('remote-changegroup does not support %s urls') %
                         parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
                         % 'size')
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise util.Abort(_('remote-changegroup: missing "%s" param') %
                             param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise util.Abort(_('%s: not a bundle version 1.0') %
                         util.hidepassword(raw_url))
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except util.Abort, e:
        raise util.Abort(_('bundle at %s is corrupted:\n%s') %
                         (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

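util.digestchecker wraps the remote stream and verifies the advertised size and digests as the bundle is consumed. A conceptual, non-streaming sketch of the same two invariants using only hashlib (`check_payload` is a hypothetical name, not a Mercurial API):

```python
import hashlib

def check_payload(data, size, digests):
    # The invariants digestchecker enforces: exact byte count, and every
    # advertised digest must match the received bytes.
    if len(data) != size:
        raise ValueError('size mismatch: expected %d, got %d'
                         % (size, len(data)))
    for name, expected in digests.items():
        actual = hashlib.new(name, data).hexdigest()
        if actual != expected:
            raise ValueError('%s mismatch' % name)

data = b'changegroup-bytes'
# passes silently when both the size and the sha1 digest line up
check_payload(data, len(data), {'sha1': hashlib.sha1(data).hexdigest()})
```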
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(('remote: %s\n' % line))

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.UnsupportedPartError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

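The unsupported-content handler rebuilds keyword arguments only for the params that were actually transmitted, with multiple parameter names packed into one NUL-separated string. A small sketch of that unpacking (`build_kwargs` is a hypothetical name for illustration):

```python
def build_kwargs(params):
    # Include only the keys that were actually present on the part.
    kwargs = {}
    if params.get('parttype') is not None:
        kwargs['parttype'] = params['parttype']
    if params.get('params') is not None:
        # several parameter names travel as a single NUL-separated string
        kwargs['params'] = params['params'].split('\0')
    return kwargs

assert build_kwargs({'parttype': 'obsmarkers',
                     'params': 'version\0heads'}) == {
    'parttype': 'obsmarkers', 'params': ['version', 'heads']}
assert build_kwargs({}) == {}
```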
@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)
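All the reply handlers above share one correlation scheme: a reply part names the part it answers through an 'in-reply-to' parameter, and the receiving side files the result under that part id. A dict-based sketch of the idea (op.records in the real code supports richer, category-based queries):

```python
records = {}

def add_record(category, value, partid):
    # File a reply's payload under (category, id of the part it answers),
    # so the caller can later look up the outcome for a specific part.
    records[(category, partid)] = value

# e.g. a reply:pushkey part answering part 3 with return code 1
add_record('pushkey', {'return': 1}, 3)
assert records[('pushkey', 3)] == {'return': 1}
```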
@@ -1,1322 +1,1326 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
import lock as lockmod

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if markers:
        remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
        version = obsolete.commonversion(remoteversions)
        if version is None:
            raise ValueError('bundler does not support common obsmarker format')
        stream = obsolete.encodemarkers(markers, True, version=version)
        return bundler.newpart('obsmarkers', data=stream)
    return None

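readbundle above dispatches on a four-byte header: two bytes of magic ('HG') plus two bytes of version ('10' for bundle1, where a compression marker follows, or any version starting with '2' for bundle2). A sketch of just that header split (`parseheader` is a hypothetical helper, not part of the module):

```python
def parseheader(header):
    # First four bytes of a bundle stream: 2-byte magic + 2-byte version.
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise ValueError('not a Mercurial bundle')
    return version

assert parseheader('HG10UN'[:4]) == '10'   # bundle1; 'UN' names compression
assert parseheader('HG20').startswith('2') # handled by the bundle2 branch
```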
def _canusebundle2(op):
    """return true if a pull/push can use bundle2

    Feel free to nuke this function when we drop the experimental option"""
    return (op.repo.ui.configbool('experimental', 'bundle2-exp', False)
            and op.remote.capable('bundle2'))


class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmark explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # step already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phases changes that must be pushed along side the changesets
        self.outdatedphases = None
        # phases changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None

119 @util.propertycache
119 @util.propertycache
120 def futureheads(self):
120 def futureheads(self):
121 """future remote heads if the changeset push succeeds"""
121 """future remote heads if the changeset push succeeds"""
122 return self.outgoing.missingheads
122 return self.outgoing.missingheads
123
123
124 @util.propertycache
124 @util.propertycache
125 def fallbackheads(self):
125 def fallbackheads(self):
126 """future remote heads if the changeset push fails"""
126 """future remote heads if the changeset push fails"""
127 if self.revs is None:
127 if self.revs is None:
128 # not target to push, all common are relevant
128 # not target to push, all common are relevant
129 return self.outgoing.commonheads
129 return self.outgoing.commonheads
130 unfi = self.repo.unfiltered()
130 unfi = self.repo.unfiltered()
131 # I want cheads = heads(::missingheads and ::commonheads)
131 # I want cheads = heads(::missingheads and ::commonheads)
132 # (missingheads is revs with secret changeset filtered out)
132 # (missingheads is revs with secret changeset filtered out)
133 #
133 #
134 # This can be expressed as:
134 # This can be expressed as:
135 # cheads = ( (missingheads and ::commonheads)
135 # cheads = ( (missingheads and ::commonheads)
136 # + (commonheads and ::missingheads))"
136 # + (commonheads and ::missingheads))"
137 # )
137 # )
138 #
138 #
139 # while trying to push we already computed the following:
139 # while trying to push we already computed the following:
140 # common = (::commonheads)
140 # common = (::commonheads)
141 # missing = ((commonheads::missingheads) - commonheads)
141 # missing = ((commonheads::missingheads) - commonheads)
142 #
142 #
143 # We can pick:
143 # We can pick:
144 # * missingheads part of common (::commonheads)
144 # * missingheads part of common (::commonheads)
145 common = set(self.outgoing.common)
145 common = set(self.outgoing.common)
146 nm = self.repo.changelog.nodemap
146 nm = self.repo.changelog.nodemap
147 cheads = [node for node in self.revs if nm[node] in common]
147 cheads = [node for node in self.revs if nm[node] in common]
148 # and
148 # and
149 # * commonheads parents on missing
149 # * commonheads parents on missing
150 revset = unfi.set('%ln and parents(roots(%ln))',
150 revset = unfi.set('%ln and parents(roots(%ln))',
151 self.outgoing.commonheads,
151 self.outgoing.commonheads,
152 self.outgoing.missing)
152 self.outgoing.missing)
153 cheads.extend(c.node() for c in revset)
153 cheads.extend(c.node() for c in revset)
154 return cheads
154 return cheads
155
155
156 @property
156 @property
157 def commonheads(self):
157 def commonheads(self):
158 """set of all common heads after changeset bundle push"""
158 """set of all common heads after changeset bundle push"""
159 if self.cgresult:
159 if self.cgresult:
160 return self.futureheads
160 return self.futureheads
161 else:
161 else:
162 return self.fallbackheads
162 return self.fallbackheads
163
163
    # mapping of message used when pushing bookmark
    bookmsgmap = {'update': (_("updating bookmark %s\n"),
                             _('updating bookmark %s failed!\n')),
                  'export': (_("exporting bookmark %s\n"),
                             _('exporting bookmark %s failed!\n')),
                  'delete': (_("deleting remote bookmark %s\n"),
                             _('deleting remote bookmark %s failed!\n')),
                  }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if _canusebundle2(pushop) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if _canusebundle2(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
        if pushop.trmanager:
            pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

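The lock discipline in push() above (wlock taken before the data lock, both released in reverse order in the `finally` block) can be sketched with stand-in locks. `FakeLock` and the `events` list are illustrative only, not Mercurial API:

```python
# Minimal sketch of the lock ordering used by push(): acquire the
# working-copy lock before the data lock, release in reverse order.
events = []

class FakeLock(object):
    # stand-in for repo.wlock() / repo.lock(); records call order
    def __init__(self, name):
        self.name = name
        events.append(('acquire', name))

    def release(self):
        events.append(('release', self.name))

localwlock = locallock = None
try:
    localwlock = FakeLock('wlock')
    locallock = FakeLock('lock')
finally:
    # reverse order: the data lock goes first, the wlock last
    if locallock is not None:
        locallock.release()
    if localwlock is not None:
        localwlock.release()
```

Releasing in reverse acquisition order avoids another process observing the repository with the data lock free but the working copy still locked.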
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

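The registry pattern above can be reduced to a standalone sketch; the step names and the `state` argument here are made up for illustration:

```python
# Standalone sketch of the step-registry pattern behind @pushdiscovery:
# each decorated function is recorded in a name -> function mapping and
# appended to an ordered list that later drives execution.
steporder = []
stepmapping = {}

def registerstep(stepname):
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('first')
def _first(state):
    state.append('first ran')

@registerstep('second')
def _second(state):
    state.append('second ran')

def runall(state):
    # mirrors _pushdiscovery(): walk the ordered list, look up, call
    for name in steporder:
        stepmapping[name](state)

state = []
runall(state)
```

Keeping the order list separate from the mapping is what lets an extension replace a step's function without disturbing the execution order.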
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX a set of roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set(pushop.bookmarks)

    comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search added bookmark
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmark
    for b, scid, dcid in advdst + diverge + differ:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmark to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

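The bookmark discovery above reduces each comparison bucket to `(name, old, new)` triples, where an empty string stands for "absent on that side". A simplified classifier (this is not the real `bookmod.compare()` signature, which also detects divergence; the bookmark names and values are made up) captures the add/update/delete cases:

```python
# Simplified sketch of how _pushdiscoverybookmarks turns a local/remote
# bookmark comparison into (name, old-remote-value, new-value) triples.
# '' means the bookmark is absent on that side.
def outgoingbookmarks(local, remote):
    out = []
    for name in sorted(set(local) | set(remote)):
        lval = local.get(name, '')
        rval = remote.get(name, '')
        if lval == rval:
            continue  # identical: nothing to push
        out.append((name, rval, lval))
    return out

triples = outgoingbookmarks({'book1': 'aaa', 'book2': 'bbb'},
                            {'book2': 'ccc', 'book3': 'ddd'})
```

Here `('book1', '', 'aaa')` is an export, `('book2', 'ccc', 'bbb')` an update, and `('book3', 'ddd', '')` a deletion.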
def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for the 80-char limit reason
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are to push and there is at least one obsolete or
            # unstable changeset in missing, then at least one of the
            # missing heads will be obsolete or unstable. So checking
            # heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(mst[ctx.troubles()[0]] % ctx)
        newbm = pushop.ui.configlist('bookmarks', 'pushing')
        discovery.checkheads(unfi, pushop.remote, outgoing,
                             pushop.remoteheads,
                             pushop.newbranch,
                             bool(pushop.incoming),
                             newbm)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

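The `idx` parameter above lets a step be spliced into the order instead of appended, which is how a part generator can be made to run before the steps registered at import time. A sketch of just the ordering logic (the step names are made up):

```python
# Sketch of the idx handling in b2partsgenerator(): by default a step
# name is appended, but an explicit index splices it in earlier.
partsgenorder = []

def addstep(stepname, idx=None):
    if idx is None:
        partsgenorder.append(stepname)
    else:
        partsgenorder.insert(idx, stepname)

addstep('changeset')
addstep('phase')
addstep('precheck', idx=0)  # hypothetical step; runs before everything
```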
@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    if not pushop.force:
        bundler.newpart('check:heads', data=iter(pushop.remoteheads))
    b2caps = bundle2.bundle2caps(pushop.remote)
    version = None
    cgversions = b2caps.get('changegroup')
    if not cgversions:  # 3.1 and 3.2 ship with an empty value
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing)
    else:
        cgversions = [v for v in cgversions if v in changegroup.packermap]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing,
                                                version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if version is not None:
        cgpart.addparam('version', version)
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

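The changegroup version negotiation in `_pushb2ctx()` above intersects the versions the remote advertises with the versions the local packermap supports, then picks the highest. A sketch under the assumption that versions compare correctly as strings (the version sets here are illustrative, not real capability data):

```python
# Sketch of the changegroup version negotiation: keep only versions
# both sides know, then pick the highest; an empty advertisement from
# old (3.1/3.2) servers means "use the version-less format".
def negotiate(remoteversions, packermap):
    if not remoteversions:
        return None  # fall back to the version-less changegroup
    common = [v for v in remoteversions if v in packermap]
    if not common:
        raise ValueError('no common changegroup version')
    return max(common)

picked = negotiate(['01', '02'], {'01': object(), '02': object()})
```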
@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []
    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        buildobsmarkerspart(bundler, pushop.outobsmarkers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode
    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1
    return handlereply



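The action selection in `_pushb2bookmarks()` above derives the message key for `bookmsgmap` from the old/new values: an empty old value means the bookmark is new on the remote (export), an empty new value means it is being removed (delete), otherwise it is an update. Isolated as a sketch:

```python
# Sketch of the bookmark action selection used by _pushb2bookmarks().
def bookmarkaction(old, new):
    if not old:
        return 'export'
    elif not new:
        return 'delete'
    return 'update'

actions = [bookmarkaction(o, n)
           for o, n in [('', 'aaa'), ('aaa', ''), ('aaa', 'bbb')]]
```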
585 def _pushbundle2(pushop):
585 def _pushbundle2(pushop):
586 """push data to the remote using bundle2
586 """push data to the remote using bundle2
587
587
588 The only currently supported type of data is changegroup but this will
588 The only currently supported type of data is changegroup but this will
589 evolve in the future."""
589 evolve in the future."""
590 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
590 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
591 pushback = (pushop.trmanager
591 pushback = (pushop.trmanager
592 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
592 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
593
593
594 # create reply capability
594 # create reply capability
595 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
595 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
596 allowpushback=pushback))
596 allowpushback=pushback))
597 bundler.newpart('replycaps', data=capsblob)
597 bundler.newpart('replycaps', data=capsblob)
598 replyhandlers = []
598 replyhandlers = []
599 for partgenname in b2partsgenorder:
599 for partgenname in b2partsgenorder:
600 partgen = b2partsgenmapping[partgenname]
600 partgen = b2partsgenmapping[partgenname]
601 ret = partgen(pushop, bundler)
601 ret = partgen(pushop, bundler)
602 if callable(ret):
602 if callable(ret):
603 replyhandlers.append(ret)
603 replyhandlers.append(ret)
604 # do not push if nothing to push
604 # do not push if nothing to push
605 if bundler.nbparts <= 1:
605 if bundler.nbparts <= 1:
606 return
606 return
607 stream = util.chunkbuffer(bundler.getchunks())
607 stream = util.chunkbuffer(bundler.getchunks())
608 try:
608 try:
609 reply = pushop.remote.unbundle(stream, ['force'], 'push')
609 reply = pushop.remote.unbundle(stream, ['force'], 'push')
610 except error.BundleValueError, exc:
610 except error.BundleValueError, exc:
611 raise util.Abort('missing support for %s' % exc)
611 raise util.Abort('missing support for %s' % exc)
612 try:
612 try:
613 trgetter = None
613 trgetter = None
614 if pushback:
614 if pushback:
615 trgetter = pushop.trmanager.transaction
615 trgetter = pushop.trmanager.transaction
616 op = bundle2.processbundle(pushop.repo, reply, trgetter)
616 op = bundle2.processbundle(pushop.repo, reply, trgetter)
617 except error.BundleValueError, exc:
617 except error.BundleValueError, exc:
618 raise util.Abort('missing support for %s' % exc)
618 raise util.Abort('missing support for %s' % exc)
619 for rephand in replyhandlers:
619 for rephand in replyhandlers:
620 rephand(op)
620 rephand(op)
621
621
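The part-generator protocol used by `_pushbundle2` above (each generator adds parts to the bundler and may return a callable that is later invoked with the reply operation) can be sketched in isolation. Every name below is illustrative, not part of Mercurial's API:

```python
# Hypothetical sketch of the bundle2 part-generator / reply-handler
# protocol: generators append parts to a bundler; a callable return
# value is queued and called once the server's reply is processed.

def makegenerators(log):
    def genchangegroup(bundler):
        bundler.append('changegroup')
        def handlereply(op):
            log.append('changegroup reply: %s' % op)
        return handlereply  # callable -> queued as a reply handler

    def genphases(bundler):
        bundler.append('phase-heads')
        return None  # no reply expected for this part

    return [genchangegroup, genphases]

def pushsketch(reply):
    log = []
    bundler = []            # stands in for bundle2.bundle20
    replyhandlers = []
    for partgen in makegenerators(log):
        ret = partgen(bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # ... the bundle would be sent here; 'reply' is the server answer ...
    for rephand in replyhandlers:
        rephand(reply)
    return bundler, log
```

The `callable(ret)` check is what lets a generator opt out of reply handling simply by returning `None`.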
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
                                             bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public-only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        rslts = []
        remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

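The action selection in `_pushbookmark` (export when there is no old value, delete when there is no new one, update otherwise) is easy to check in isolation:

```python
# Stand-alone version of the bookmark action selection used by
# _pushbookmark: an empty string marks a missing old or new value.

def bookmarkaction(old, new):
    if not old:
        return 'export'   # bookmark is new on the remote
    elif not new:
        return 'delete'   # bookmark is being removed
    return 'update'       # bookmark moves to a new node

assert bookmarkaction('', 'abc123') == 'export'
assert bookmarkaction('abc123', '') == 'delete'
assert bookmarkaction('abc123', 'def456') == 'update'
```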
class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=()):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = bookmarks
        # do we force pull?
        self.force = force
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

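`util.propertycache`, which backs `pulledsubset` above, computes the value on first access and caches it on the instance. A minimal stand-in (not Mercurial's actual implementation, though it follows the same non-data-descriptor idea) looks like:

```python
# Hypothetical minimal stand-in for util.propertycache: a non-data
# descriptor that stores the computed value in the instance dict,
# which shadows the descriptor on subsequent lookups.

class propertycache(object):
    def __init__(self, func):
        self.func = func
        self.name = func.__name__
    def __get__(self, obj, type=None):
        result = self.func(obj)
        obj.__dict__[self.name] = result  # shadows the descriptor
        return result

class demo(object):
    def __init__(self):
        self.calls = 0
    @propertycache
    def expensive(self):
        self.calls += 1
        return 42

d = demo()
assert d.expensive == 42
assert d.expensive == 42
assert d.calls == 1  # computed only once, then served from __dict__
```

Because only `__get__` is defined, the cached instance attribute wins every lookup after the first, so there is no per-access overhead.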
class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

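The life cycle above (lazy construction, `close()` on success, unconditional `release()` in a `finally`) can be modelled with a toy transaction; `faketr` and `lazytrmanager` are stand-ins, not Mercurial objects:

```python
# Toy model of the transactionmanager life cycle: the transaction is
# created lazily on first use, and release() after a successful close()
# must not abort the underlying transaction.

class faketr(object):
    def __init__(self):
        self.state = 'open'
    def close(self):
        self.state = 'closed'
    def release(self):
        if self.state == 'open':    # only abort if never closed
            self.state = 'aborted'

class lazytrmanager(object):
    def __init__(self):
        self._tr = None
    def transaction(self):
        if not self._tr:
            self._tr = faketr()
        return self._tr
    def close(self):
        if self._tr is not None:
            self._tr.close()
    def release(self):
        if self._tr is not None:
            self._tr.release()

mgr = lazytrmanager()
try:
    tr = mgr.transaction()
    assert tr is mgr.transaction()  # constructed once, then reused
    mgr.close()
finally:
    mgr.release()
assert tr.state == 'closed'  # release after close did not abort
```

This mirrors how `pull()` below drives the manager: `close()` inside the `try`, `release()` in the `finally`, so an exception anywhere rolls the transaction back.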
def pull(repo, remote, heads=None, force=False, bookmarks=()):
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    pullop.remotebookmarks = remote.listkeys('bookmarks')
    lock = pullop.repo.lock()
    try:
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        _pulldiscovery(pullop)
        if _canusebundle2(pullop):
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        pullop.trmanager.release()
        lock.release()

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pulldiscovery dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

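The registration pattern above (an ordered list of step names plus a name-to-function mapping, both filled by a decorator) can be reproduced standalone; the names here are illustrative:

```python
# Self-contained copy of the pulldiscovery registration pattern: steps
# run in registration order, and an extension can wrap an entry in the
# mapping without disturbing the order list.

steporder = []
stepmapping = {}

def registerstep(stepname):
    def dec(func):
        assert stepname not in stepmapping  # one function per step name
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('changegroup')
def stepchangegroup(state):
    state.append('changegroup')

@registerstep('bookmarks')
def stepbookmarks(state):
    state.append('bookmarks')

def rundiscovery():
    state = []
    for stepname in steporder:
        stepmapping[stepname](state)  # lookup at call time allows wrapping
    return state
```

Looking the function up in the mapping at call time (rather than iterating over captured function objects) is what lets extensions replace `stepmapping['changegroup']` and still be picked up.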
@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will handle all discovery
    at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situation. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but is not included in a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

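The head-classification loop above (remote heads already known locally are promoted into `common`; the rest stay as genuinely unknown heads) behaves like this on toy data, with plain strings standing in for nodes and a dict standing in for the nodemap:

```python
# Toy version of the "common but locally hidden" head filtering in
# _pulldiscoverychangegroup: heads present in the local nodemap 'nm'
# move into 'common'; the remainder are kept as unknown remote heads.

def filterknownheads(common, rheads, nm):
    scommon = set(common)
    filteredrheads = []
    for n in rheads:
        if n in nm:
            if n not in scommon:
                common.append(n)   # known locally -> promote to common
        else:
            filteredrheads.append(n)  # genuinely unknown, keep for fetch
    return common, filteredrheads

common, rheads = filterknownheads(['a'], ['a', 'b', 'c'], {'a': 0, 'b': 1})
assert common == ['a', 'b']   # 'b' is known locally, promoted to common
assert rheads == ['c']        # 'c' is the only head left to fetch
```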
def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    remotecaps = bundle2.bundle2caps(pullop.remote)
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}
    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in remotecaps:
        kwargs['listkeys'] = ['phase', 'bookmarks']
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(remotecaps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value
            _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing, which would break a future useful rollback
    # call
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing: all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
1090 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1090 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1091 if dheads:
1091 if dheads:
1092 tr = pullop.gettransaction()
1092 tr = pullop.gettransaction()
1093 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1093 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1094
1094
def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code that
    a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes."""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr
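As an aside on the wire format used above: each `dump*` listkey value is a base85-encoded binary blob of obsolescence markers. Mercurial ships its own `base85` module, but Python 3's stdlib `base64.b85encode`/`b85decode` use the same RFC 1924-style alphabet, so the roundtrip the pull side performs can be sketched standalone (illustrative only, not Mercurial's module):

```python
import base64

# Obsolescence markers travel over pushkey as ASCII-safe base85 blobs.
# Hypothetical binary marker payload for illustration:
payload = b'\x00\x01binary-marker-data\xff'
encoded = base64.b85encode(payload)     # safe to put in a listkeys dict
decoded = base64.b85decode(encoded)     # what _pullobsolete does per key
assert decoded == payload
```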

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = set(['HG20'])
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urllib.quote(capsblob))
    return caps
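`caps20to10` folds the whole bundle2 capability blob into a single legacy capability token; percent-encoding keeps it free of spaces and newlines. A standalone sketch using Python 3's `urllib.parse` (the code above uses Python 2's `urllib.quote`), with a made-up blob for illustration:

```python
from urllib.parse import quote, unquote

# Hypothetical capabilities blob; the real one comes from
# bundle2.encodecaps(bundle2.getrepocaps(repo)).
capsblob = 'HG20\nchangegroup=01,02\nlistkeys'
cap = 'bundle2=' + quote(capsblob)       # one space-free legacy token
assert ' ' not in cap and '\n' not in cap
# the peer reverses this in getbundle():
assert unquote(cap[len('bundle2='):]) == capsblob
```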

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for functions generating a bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec
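The registration pattern above (a name -> function mapping plus an ordered step list, with optional insertion at a fixed index) can be sketched standalone, outside Mercurial, with hypothetical part generators:

```python
# Minimal sketch of the getbundle2partsgenerator pattern.
partsorder = []      # step names, in execution order
partsmapping = {}    # step name -> generator function

def partsgenerator(stepname, idx=None):
    def dec(func):
        assert stepname not in partsmapping
        partsmapping[stepname] = func
        if idx is None:
            partsorder.append(stepname)      # appended in decoration order
        else:
            partsorder.insert(idx, stepname)  # or forced to a position
        return func
    return dec

@partsgenerator('changegroup')
def changegroup_part(bundle):
    bundle.append('changegroup')

@partsgenerator('listkeys')
def listkeys_part(bundle):
    bundle.append('listkeys')

@partsgenerator('check', idx=0)   # idx places this step first
def check_part(bundle):
    bundle.append('check')

bundle = []
for name in partsorder:           # this loop is what getbundle() runs
    partsmapping[name](bundle)
assert bundle == ['check', 'changegroup', 'listkeys']
```

An extension that wants to wrap an existing step would replace its entry in `partsmapping` rather than decorate a new function.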

def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
              **kwargs):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed. For now the bundle can only contain changegroup parts, but this
    will change when more part types become available for bundle2.

    This is different from changegroup.getchangegroup, which only returns an
    HG10 changegroup bundle. They may eventually get reunited in the future
    when we have a clearer idea of the API we want to use to query different
    data.

    The implementation is at a very early stage and will get massive rework
    when the API of bundle is refined.
    """
    # bundle10 case
    usebundle2 = False
    if bundlecaps is not None:
        usebundle2 = util.any((cap.startswith('HG2') for cap in bundlecaps))
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        return changegroup.getchangegroup(repo, source, heads=heads,
                                          common=common, bundlecaps=bundlecaps)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urllib.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return util.chunkbuffer(bundler.getchunks())

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = None
        cgversions = b2caps.get('changegroup')
        if not cgversions:  # 3.1 and 3.2 ship with an empty value
            cg = changegroup.getchangegroupraw(repo, source, heads=heads,
                                               common=common,
                                               bundlecaps=bundlecaps)
        else:
            cgversions = [v for v in cgversions if v in changegroup.packermap]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
            cg = changegroup.getchangegroupraw(repo, source, heads=heads,
                                               common=common,
                                               bundlecaps=bundlecaps,
                                               version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if version is not None:
            part.addparam('version', version)
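The version negotiation in `_getbundlechangegrouppart` is a small intersect-and-take-max: keep only the versions the client advertised that we can also pack, then pick the highest. A standalone sketch with a hypothetical local `packermap`:

```python
# Illustrative local packer table; the real one is changegroup.packermap.
packermap = {'01': 'cg1packer', '02': 'cg2packer'}

def pickversion(cgversions):
    """Highest changegroup version both sides support.

    Version strings like '01', '02' compare correctly as strings.
    """
    common = [v for v in cgversions if v in packermap]
    if not common:
        raise ValueError('no common changegroup version')
    return max(common)

assert pickversion(['01', '02', '03']) == '02'  # '03' unknown locally
assert pickversion(['01']) == '01'
```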

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        buildobsmarkerspart(bundler, markers)

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = util.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

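The `'hashed'` comparison in `check_heads` is a SHA-1 over the concatenation of the repo's binary head nodes, sorted first so the check does not depend on the order heads are enumerated in. A self-contained sketch with fake 20-byte node ids:

```python
import hashlib

# Fake 20-byte binary node ids standing in for real changeset hashes.
heads = [b'\x01' * 20, b'\x02' * 20]

# Sorting before hashing makes the digest order-insensitive, which is why
# a client and server enumerating heads differently still agree.
h1 = hashlib.sha1(b''.join(sorted(heads))).digest()
h2 = hashlib.sha1(b''.join(sorted(reversed(heads)))).digest()
assert h1 == h2
assert len(h1) == 20   # .digest() is the raw 20-byte value, not hex
```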
def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of
    the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    wlock = lock = tr = None
    recordout = None
    try:
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if util.safehasattr(cg, 'params'):
            r = None
            try:
                wlock = repo.wlock()
                lock = repo.lock()
                tr = repo.transaction(source)
                tr.hookargs['source'] = source
                tr.hookargs['url'] = url
                tr.hookargs['bundle2'] = '1'
-               r = bundle2.processbundle(repo, cg, lambda: tr).reply
+               op = bundle2.bundleoperation(repo, lambda: tr)
+               try:
+                   r = bundle2.processbundle(repo, cg, op=op)
+               finally:
+                   r = op.reply
                if r is not None:
                    repo.ui.pushbuffer(error=True, subproc=True)
                    def recordout(output):
                        r.newpart('output', data=output, mandatory=False)
                tr.close()
            except Exception, exc:
                exc.duringunbundle2 = True
                if r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
        else:
            lock = repo.lock()
            r = changegroup.addchangegroup(repo, cg, source, url)
    finally:
        lockmod.release(tr, lock, wlock)
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r
@@ -1,621 +1,669 @@
Test exchange of common information using bundle2


  $ getmainid() {
  >    hg -R main log --template '{node}\n' --rev "$1"
  > }

enable obsolescence

  $ cat > $TESTTMP/bundle2-pushkey-hook.sh << EOF
  > echo pushkey: lock state after \"\$HG_NAMESPACE\"
  > hg debuglock
  > EOF

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > evolution=createmarkers,exchange
  > bundle2-exp=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > [web]
  > push_ssl = false
  > allow_push = *
  > [phases]
  > publish=False
  > [hooks]
  > pretxnclose.tip = hg log -r tip -T "pre-close-tip:{node|short} {phase} {bookmarks}\n"
  > txnclose.tip = hg log -r tip -T "postclose-tip:{node|short} {phase} {bookmarks}\n"
  > txnclose.env = sh -c "HG_LOCAL= python \"$TESTDIR/printenv.py\" txnclose"
  > pushkey= sh "$TESTTMP/bundle2-pushkey-hook.sh"
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'
  pre-close-tip:3903775176ed draft
  postclose-tip:3903775176ed draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=commit (glob)

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=unbundle HG_TXNID=TXN:* HG_TXNNAME=unbundle (glob)
  bundle:*/tests/bundles/rebase.hg HG_URL=bundle:*/tests/bundles/rebase.hg (glob)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ cd ..

Real world exchange
=====================

Add more obsolescence information

  $ hg -R main debugobsolete -d '0 0' 1111111111111111111111111111111111111111 `getmainid 9520eea781bc`
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main debugobsolete -d '0 0' 2222222222222222222222222222222222222222 `getmainid 24b6387c8c8c`
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)

clone --pull

  $ hg -R main phase --public cd010b8cd998
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  1 new obsolescence markers
  pre-close-tip:9520eea781bc draft
  postclose-tip:9520eea781bc draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull

  $ hg -R main phase --public 9520eea781bc
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers
  pre-close-tip:24b6387c8c8c draft
  postclose-tip:24b6387c8c8c draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=24b6387c8c8cae37178880f3fa95ded3cb1cf785 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R other log -G
  o  2:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty (with phase movement)

  $ hg -R main phase --public 24b6387c8c8c
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  pre-close-tip:24b6387c8c8c public
  postclose-tip:24b6387c8c8c public
  txnclose hook: HG_NEW_OBSMARKERS=0 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  $ hg -R other log -G
  o  2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  pre-close-tip:24b6387c8c8c public
  postclose-tip:24b6387c8c8c public
  txnclose hook: HG_NEW_OBSMARKERS=0 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  $ hg -R other log -G
  o  2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com>  F
  |
  | @  1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com>  E
  |/
  o  0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com>  A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

add extra data to test their exchange during push

  $ hg -R main bookmark --rev eea13746799a book_eea1
  $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
175 $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
176 pre-close-tip:02de42196ebe draft
176 pre-close-tip:02de42196ebe draft
177 postclose-tip:02de42196ebe draft
177 postclose-tip:02de42196ebe draft
178 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
178 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
179 $ hg -R main bookmark --rev 02de42196ebe book_02de
179 $ hg -R main bookmark --rev 02de42196ebe book_02de
180 $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
180 $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
181 pre-close-tip:02de42196ebe draft book_02de
181 pre-close-tip:02de42196ebe draft book_02de
182 postclose-tip:02de42196ebe draft book_02de
182 postclose-tip:02de42196ebe draft book_02de
183 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
183 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
184 $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
184 $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
185 $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
185 $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
186 pre-close-tip:02de42196ebe draft book_02de
186 pre-close-tip:02de42196ebe draft book_02de
187 postclose-tip:02de42196ebe draft book_02de
187 postclose-tip:02de42196ebe draft book_02de
188 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
188 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
189 $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
189 $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
190 $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
190 $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
191 pre-close-tip:02de42196ebe draft book_02de
191 pre-close-tip:02de42196ebe draft book_02de
192 postclose-tip:02de42196ebe draft book_02de
192 postclose-tip:02de42196ebe draft book_02de
193 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
193 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
194 $ hg -R main bookmark --rev 32af7686d403 book_32af
194 $ hg -R main bookmark --rev 32af7686d403 book_32af
195 $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`
195 $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`
196 pre-close-tip:02de42196ebe draft book_02de
196 pre-close-tip:02de42196ebe draft book_02de
197 postclose-tip:02de42196ebe draft book_02de
197 postclose-tip:02de42196ebe draft book_02de
198 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
198 txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
199
199
200 $ hg -R other bookmark --rev cd010b8cd998 book_eea1
200 $ hg -R other bookmark --rev cd010b8cd998 book_eea1
201 $ hg -R other bookmark --rev cd010b8cd998 book_02de
201 $ hg -R other bookmark --rev cd010b8cd998 book_02de
202 $ hg -R other bookmark --rev cd010b8cd998 book_42cc
202 $ hg -R other bookmark --rev cd010b8cd998 book_42cc
203 $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
203 $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
204 $ hg -R other bookmark --rev cd010b8cd998 book_32af
204 $ hg -R other bookmark --rev cd010b8cd998 book_32af
205
205
206 $ hg -R main phase --public eea13746799a
206 $ hg -R main phase --public eea13746799a
207 pre-close-tip:02de42196ebe draft book_02de
207 pre-close-tip:02de42196ebe draft book_02de
208 postclose-tip:02de42196ebe draft book_02de
208 postclose-tip:02de42196ebe draft book_02de
209 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
209 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
210
210
211 push
211 push
212 $ hg -R main push other --rev eea13746799a --bookmark book_eea1
212 $ hg -R main push other --rev eea13746799a --bookmark book_eea1
213 pushing to other
213 pushing to other
214 searching for changes
214 searching for changes
215 remote: adding changesets
215 remote: adding changesets
216 remote: adding manifests
216 remote: adding manifests
217 remote: adding file changes
217 remote: adding file changes
218 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
218 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
219 remote: 1 new obsolescence markers
219 remote: 1 new obsolescence markers
220 remote: pre-close-tip:eea13746799a public book_eea1
220 remote: pre-close-tip:eea13746799a public book_eea1
221 remote: pushkey: lock state after "phases"
221 remote: pushkey: lock state after "phases"
222 remote: lock: free
222 remote: lock: free
223 remote: wlock: free
223 remote: wlock: free
224 remote: pushkey: lock state after "bookmarks"
224 remote: pushkey: lock state after "bookmarks"
225 remote: lock: free
225 remote: lock: free
226 remote: wlock: free
226 remote: wlock: free
227 remote: postclose-tip:eea13746799a public book_eea1
227 remote: postclose-tip:eea13746799a public book_eea1
228 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=eea13746799a9e0bfd88f29d3c2e9dc9389f524f HG_PHASES_MOVED=1 HG_SOURCE=push HG_TXNID=TXN:* HG_TXNNAME=push HG_URL=push (glob)
228 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=eea13746799a9e0bfd88f29d3c2e9dc9389f524f HG_PHASES_MOVED=1 HG_SOURCE=push HG_TXNID=TXN:* HG_TXNNAME=push HG_URL=push (glob)
229 updating bookmark book_eea1
229 updating bookmark book_eea1
230 pre-close-tip:02de42196ebe draft book_02de
230 pre-close-tip:02de42196ebe draft book_02de
231 postclose-tip:02de42196ebe draft book_02de
231 postclose-tip:02de42196ebe draft book_02de
232 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
232 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
233 file:/*/$TESTTMP/other HG_URL=file:$TESTTMP/other (glob)
233 file:/*/$TESTTMP/other HG_URL=file:$TESTTMP/other (glob)
234 $ hg -R other log -G
234 $ hg -R other log -G
235 o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
235 o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
236 |\
236 |\
237 | o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
237 | o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
238 | |
238 | |
239 @ | 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
239 @ | 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
240 |/
240 |/
241 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A
241 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A
242
242
243 $ hg -R other debugobsolete
243 $ hg -R other debugobsolete
244 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
244 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
245 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
245 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
246 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
246 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
247
247
248 pull over ssh
248 pull over ssh
249
249
250 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
250 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
251 pulling from ssh://user@dummy/main
251 pulling from ssh://user@dummy/main
252 searching for changes
252 searching for changes
253 adding changesets
253 adding changesets
254 adding manifests
254 adding manifests
255 adding file changes
255 adding file changes
256 added 1 changesets with 1 changes to 1 files (+1 heads)
256 added 1 changesets with 1 changes to 1 files (+1 heads)
257 1 new obsolescence markers
257 1 new obsolescence markers
258 updating bookmark book_02de
258 updating bookmark book_02de
259 pre-close-tip:02de42196ebe draft book_02de
259 pre-close-tip:02de42196ebe draft book_02de
260 postclose-tip:02de42196ebe draft book_02de
260 postclose-tip:02de42196ebe draft book_02de
261 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
261 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
262 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
262 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
263 (run 'hg heads' to see heads, 'hg merge' to merge)
263 (run 'hg heads' to see heads, 'hg merge' to merge)
264 $ hg -R other debugobsolete
264 $ hg -R other debugobsolete
265 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
265 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
266 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
266 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
269
269
270 pull over http
270 pull over http
271
271
272 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
272 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
273 $ cat main.pid >> $DAEMON_PIDS
273 $ cat main.pid >> $DAEMON_PIDS
274
274
275 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
275 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
276 pulling from http://localhost:$HGPORT/
276 pulling from http://localhost:$HGPORT/
277 searching for changes
277 searching for changes
278 adding changesets
278 adding changesets
279 adding manifests
279 adding manifests
280 adding file changes
280 adding file changes
281 added 1 changesets with 1 changes to 1 files (+1 heads)
281 added 1 changesets with 1 changes to 1 files (+1 heads)
282 1 new obsolescence markers
282 1 new obsolescence markers
283 updating bookmark book_42cc
283 updating bookmark book_42cc
284 pre-close-tip:42ccdea3bb16 draft book_42cc
284 pre-close-tip:42ccdea3bb16 draft book_42cc
285 postclose-tip:42ccdea3bb16 draft book_42cc
285 postclose-tip:42ccdea3bb16 draft book_42cc
286 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
286 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
287 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
287 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
288 (run 'hg heads .' to see heads, 'hg merge' to merge)
288 (run 'hg heads .' to see heads, 'hg merge' to merge)
289 $ cat main-error.log
289 $ cat main-error.log
290 $ hg -R other debugobsolete
290 $ hg -R other debugobsolete
291 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
291 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
292 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
292 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
296
296
297 push over ssh
297 push over ssh
298
298
299 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
299 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
300 pushing to ssh://user@dummy/other
300 pushing to ssh://user@dummy/other
301 searching for changes
301 searching for changes
302 remote: adding changesets
302 remote: adding changesets
303 remote: adding manifests
303 remote: adding manifests
304 remote: adding file changes
304 remote: adding file changes
305 remote: added 1 changesets with 1 changes to 1 files
305 remote: added 1 changesets with 1 changes to 1 files
306 remote: 1 new obsolescence markers
306 remote: 1 new obsolescence markers
307 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
307 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
308 remote: pushkey: lock state after "bookmarks"
308 remote: pushkey: lock state after "bookmarks"
309 remote: lock: free
309 remote: lock: free
310 remote: wlock: free
310 remote: wlock: free
311 remote: postclose-tip:5fddd98957c8 draft book_5fdd
311 remote: postclose-tip:5fddd98957c8 draft book_5fdd
312 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
312 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
313 updating bookmark book_5fdd
313 updating bookmark book_5fdd
314 pre-close-tip:02de42196ebe draft book_02de
314 pre-close-tip:02de42196ebe draft book_02de
315 postclose-tip:02de42196ebe draft book_02de
315 postclose-tip:02de42196ebe draft book_02de
316 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
316 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
317 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
317 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
318 $ hg -R other log -G
318 $ hg -R other log -G
319 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
319 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
320 |
320 |
321 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
321 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
322 |
322 |
323 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
323 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
324 | |
324 | |
325 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
325 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
326 | |/|
326 | |/|
327 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
327 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
328 |/ /
328 |/ /
329 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
329 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
330 |/
330 |/
331 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A
331 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A
332
332
333 $ hg -R other debugobsolete
333 $ hg -R other debugobsolete
334 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
334 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
335 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
335 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
336 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
336 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
337 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
337 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
338 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
338 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
339 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
339 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
340
340
341 push over http
341 push over http
342
342
343 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
343 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
344 $ cat other.pid >> $DAEMON_PIDS
344 $ cat other.pid >> $DAEMON_PIDS
345
345
346 $ hg -R main phase --public 32af7686d403
346 $ hg -R main phase --public 32af7686d403
347 pre-close-tip:02de42196ebe draft book_02de
347 pre-close-tip:02de42196ebe draft book_02de
348 postclose-tip:02de42196ebe draft book_02de
348 postclose-tip:02de42196ebe draft book_02de
349 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
349 txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
350 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
350 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
351 pushing to http://localhost:$HGPORT2/
351 pushing to http://localhost:$HGPORT2/
352 searching for changes
352 searching for changes
353 remote: adding changesets
353 remote: adding changesets
354 remote: adding manifests
354 remote: adding manifests
355 remote: adding file changes
355 remote: adding file changes
356 remote: added 1 changesets with 1 changes to 1 files
356 remote: added 1 changesets with 1 changes to 1 files
357 remote: 1 new obsolescence markers
357 remote: 1 new obsolescence markers
358 remote: pre-close-tip:32af7686d403 public book_32af
358 remote: pre-close-tip:32af7686d403 public book_32af
359 remote: pushkey: lock state after "phases"
359 remote: pushkey: lock state after "phases"
360 remote: lock: free
360 remote: lock: free
361 remote: wlock: free
361 remote: wlock: free
362 remote: pushkey: lock state after "bookmarks"
362 remote: pushkey: lock state after "bookmarks"
363 remote: lock: free
363 remote: lock: free
364 remote: wlock: free
364 remote: wlock: free
365 remote: postclose-tip:32af7686d403 public book_32af
365 remote: postclose-tip:32af7686d403 public book_32af
366 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=32af7686d403cf45b5d95f2d70cebea587ac806a HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:http:127.0.0.1: (glob)
366 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=32af7686d403cf45b5d95f2d70cebea587ac806a HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:http:127.0.0.1: (glob)
367 updating bookmark book_32af
367 updating bookmark book_32af
368 pre-close-tip:02de42196ebe draft book_02de
368 pre-close-tip:02de42196ebe draft book_02de
369 postclose-tip:02de42196ebe draft book_02de
369 postclose-tip:02de42196ebe draft book_02de
370 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
370 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
371 http://localhost:$HGPORT2/ HG_URL=http://localhost:$HGPORT2/
371 http://localhost:$HGPORT2/ HG_URL=http://localhost:$HGPORT2/
372 $ cat other-error.log
372 $ cat other-error.log
373
373
374 Check final content.
374 Check final content.
375
375
376 $ hg -R other log -G
376 $ hg -R other log -G
377 o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
377 o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
378 |
378 |
379 o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
379 o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
380 |
380 |
381 o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
381 o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
382 |
382 |
383 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
383 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
384 | |
384 | |
385 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
385 | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
386 | |/|
386 | |/|
387 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
387 | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
388 |/ /
388 |/ /
389 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
389 | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
390 |/
390 |/
391 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
391 o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A
392
392
393 $ hg -R other debugobsolete
393 $ hg -R other debugobsolete
394 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
394 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
395 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
395 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
396 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
396 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
397 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
397 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
398 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
398 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
399 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
399 6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
400 7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
400 7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
401
401
402 (check that no 'pending' files remain)
402 (check that no 'pending' files remain)
403
403
404 $ ls -1 other/.hg/bookmarks*
404 $ ls -1 other/.hg/bookmarks*
405 other/.hg/bookmarks
405 other/.hg/bookmarks
406 $ ls -1 other/.hg/store/phaseroots*
406 $ ls -1 other/.hg/store/phaseroots*
407 other/.hg/store/phaseroots
407 other/.hg/store/phaseroots
408 $ ls -1 other/.hg/store/00changelog.i*
408 $ ls -1 other/.hg/store/00changelog.i*
409 other/.hg/store/00changelog.i
409 other/.hg/store/00changelog.i
410
410
Error Handling
==============

Check that errors are properly returned to the client during push.

Setting up

  $ cat > failpush.py << EOF
  > """A small extension that makes push fail when using bundle2
  >
  > used to test error handling in bundle2
  > """
  >
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import exchange
  > from mercurial import extensions
  >
  > def _pushbundle2failpart(pushop, bundler):
  >     reason = pushop.ui.config('failpush', 'reason', None)
  >     part = None
  >     if reason == 'abort':
  >         bundler.newpart('test:abort')
  >     if reason == 'unknown':
  >         bundler.newpart('test:unknown')
  >     if reason == 'race':
  >         # 20 Bytes of crap
  >         bundler.newpart('check:heads', data='01234567890123456789')
  >
  > @bundle2.parthandler("test:abort")
  > def handleabort(op, part):
  >     raise util.Abort('Abandon ship!', hint="don't panic")
  >
  > def uisetup(ui):
  >     exchange.b2partsgenmapping['failpart'] = _pushbundle2failpart
  >     exchange.b2partsgenorder.insert(0, 'failpart')
  >
  > EOF

  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  pre-close-tip:e7ec4e813ba6 draft
  postclose-tip:e7ec4e813ba6 draft
  txnclose hook: HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

468 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
468 $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
469 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
469 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
470 $ cat other.pid >> $DAEMON_PIDS
470 $ cat other.pid >> $DAEMON_PIDS
471
471
Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]


Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for test:unknown
  [255]

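The aborts above follow from how bundle2 treats unknown parts: a receiver with no handler for a *mandatory* part must refuse the whole bundle, while an unknown *advisory* part may simply be skipped. A rough, self-contained model of that rule (the explicit `mandatory` flag and `process` function here are illustrative, not Mercurial's wire encoding):

```python
# Dispatch one bundle2-style part: abort on an unknown mandatory part,
# skip an unknown advisory one, otherwise run the registered handler.

handlers = {"changegroup": lambda part: "applied"}

def process(parttype, mandatory, part=None):
    handler = handlers.get(parttype)
    if handler is None:
        if mandatory:
            # Mirrors the "missing support for test:unknown" abort above.
            raise ValueError("missing support for %s" % parttype)
        return "skipped"  # unknown advisory part: ignore it
    return handler(part)
```

This is why the failing part generator only has to emit one unknown mandatory part to make every transport (local, ssh, http) reject the push.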
Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

Doing the actual push: hook abort

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason =
  > [hooks]
  > pretxnclose.failpush = echo "You shall not pass!"; false
  > txnabort.failpush = echo 'Cleaning up the mess...'
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

(check that no 'pending' files remain)

  $ ls -1 other/.hg/bookmarks*
  other/.hg/bookmarks
  $ ls -1 other/.hg/store/phaseroots*
  other/.hg/store/phaseroots
  $ ls -1 other/.hg/store/00changelog.i*
  other/.hg/store/00changelog.i

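The hook ordering exercised above (a failing pretxnclose hook triggers the txnabort hook, then rollback, then the abort message) can be modeled with a toy transaction. This is a sketch of the observable sequence only, not Mercurial's transaction code:

```python
# Toy model of the hook ordering shown in the push output above.

class HookAbort(Exception):
    pass

class Transaction:
    def __init__(self, hooks):
        # hooks: name -> callable(log) returning True on success
        self.hooks = hooks
        self.log = []

    def close(self):
        pretxnclose = self.hooks.get("pretxnclose")
        if pretxnclose is not None and not pretxnclose(self.log):
            self.log.append("transaction abort!")
            txnabort = self.hooks.get("txnabort")
            if txnabort is not None:
                txnabort(self.log)  # cleanup hook runs before rollback
            self.log.append("rollback completed")
            raise HookAbort("pretxnclose hook exited with status 1")
        self.log.append("txnclose")
```

The point the test verifies is that the abort path fully unwinds: after rollback, no `.pending` copies of bookmarks, phaseroots, or the changelog are left behind.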
Check error from hook during the unbundling process itself

  $ cat << EOF >> $HGRCPATH
  > pretxnchangegroup = echo "Fail early!"; false
  > EOF
  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]