bundle2: abort when a mandatory pushkey part fails...
Pierre-Yves David
r25481:6de96cb3 default
@@ -1,1384 +1,1387 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic container to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that are handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters.

:params value: arbitrary number of bytes

  A blob of `params size` bytes containing the serialized version of all
  stream level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any applicative level options MUST go into a bundle2 part instead.
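The stream level parameter rules above (space separated, urlquoted, leading capital marks a mandatory parameter) can be sketched as follows. This is a standalone Python 3 illustration, not bundle2's own code; the helper names are ours.

```python
from urllib.parse import quote, unquote

def encodeparams(params):
    """serialize (name, value) pairs into a space separated, urlquoted blob"""
    chunks = []
    for name, value in params:
        assert name and name[0].isalpha()  # empty names are forbidden
        chunk = quote(name, safe='')
        if value is not None:
            chunk += '=' + quote(value, safe='')
        chunks.append(chunk)
    return ' '.join(chunks)

def decodeparams(blob):
    """parse the blob back into (name, value) pairs"""
    params = []
    for chunk in blob.split(' '):
        if '=' in chunk:
            name, value = chunk.split('=', 1)
            params.append((unquote(name), unquote(value)))
        else:
            params.append((unquote(chunk), None))
    return params

# a parameter is mandatory when its first letter is upper case
ismandatory = lambda name: name[0].isupper()
```

Note that a valueless parameter is serialized as a bare name, with no `=` sign at all.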

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32 bits integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

      A part's parameters may have arbitrary content, the binary structure
      is::

          <mandatory-count><advisory-count><param-sizes><param-data>

      :mandatory-count: 1 byte, number of mandatory parameters

      :advisory-count: 1 byte, number of advisory parameters

      :param-sizes:

          N couples of bytes, where N is the total number of parameters. Each
          couple contains (<size-of-key>, <size-of-value>) for one parameter.

      :param-data:

          A blob of bytes from which each parameter key and value can be
          retrieved using the list of size couples stored in the previous
          field.

          Mandatory parameters come first, then the advisory ones.

          Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
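The part header layout above can be made concrete with a small Python 3 packing sketch. The helper name is ours and the sketch only covers the header (typesize, parttype, partid, parameter counts, size couples, parameter data), not the payload chunks.

```python
import struct

def packpartheader(parttype, partid, mandparams, advparams):
    """pack a part header per the layout described in the docstring

    mandparams/advparams are lists of (key, value) byte string pairs;
    mandatory parameters are serialized first."""
    header = struct.pack('>B', len(parttype)) + parttype  # typesize, parttype
    header += struct.pack('>I', partid)                   # 32 bits part id
    allparams = mandparams + advparams
    header += struct.pack('>BB', len(mandparams), len(advparams))
    sizes = []
    data = b''
    for key, value in allparams:
        sizes.extend((len(key), len(value)))              # one couple per param
        data += key + value
    header += struct.pack('>' + 'BB' * len(allparams), *sizes)
    return header + data

hdr = packpartheader(b'changegroup', 0, [(b'version', b'02')], [])
```

On the wire this header would itself be preceded by an int32 `header size` field, as described above.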

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

import errno
import sys
import util
import struct
import urllib
import string
import obsolete
import pushkey
import url
import re

import changegroup, error, tags
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)
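A quick usage sketch of the dynamic format `_makefpartparamsizes` builds: for two parameters it returns `'>BBBB'`, which unpacks the two (key size, value size) couples in one call. The literal bytes here are illustrative.

```python
import struct

fmt = '>' + ('BB' * 2)  # what _makefpartparamsizes(2) would return
sizes = struct.unpack(fmt, b'\x07\x02\x05\x01')
# sizes == (7, 2, 5, 1): key/value lengths for two parameters
```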
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of the category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
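The record semantics are easiest to see in use. Below is a minimal Python 3 re-sketch of the same API (category lookup, chronological iteration), trimmed of the reply tracking; it is an illustration, not the class above.

```python
class records(object):
    """minimal sketch of the unbundlerecords add/getitem/iter semantics"""
    def __init__(self):
        self._categories = {}
        self._sequences = []
    def add(self, category, entry):
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
    def __getitem__(self, cat):
        # all entries of one category, empty tuple if none recorded
        return tuple(self._categories.get(cat, ()))
    def __iter__(self):
        # chronological (category, entry) pairs across all categories
        return iter(self._sequences)

r = records()
r.add('changegroup', {'return': 1})
r.add('pushkey', {'ok': True})
```

After those two calls, `r['changegroup']` holds one entry and iterating `r` yields both records in insertion order.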
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part, then searches for and uses the proper
    handling code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except BaseException, exc:
        for nbpart, part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from processing the old format. This is mostly needed
        # to handle different return codes to unbundle according to the type
        # of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
        exc._bundle2salvagedoutput = salvaged
        raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.UnsupportedPartError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.UnsupportedPartError(parttype=part.type,
                                                 params=unknownparams)
            status = 'supported'
        except error.UnsupportedPartError, exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
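The caps encoding above is easy to exercise standalone. Here is a Python 3 sketch of the encoding side (`urllib.quote` lives in `urllib.parse` on Python 3); `encodecaps3` is our illustrative name, not part of the module.

```python
from urllib.parse import quote

def encodecaps3(caps):
    """same shape as encodecaps above: one capability per line,
    valueless caps as bare names, values comma-joined after '='"""
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = '%s=%s' % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

blob = encodecaps3({'HG20': [], 'changegroup': ['01', '02']})
```

A capability with no values ends up as a bare line (`HG20`), which `decodecaps` maps back to an empty list.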
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
520 outdebug(self.ui, 'bundle parameter: %s' % param)
521 yield _pack(_fstreamparamsize, len(param))
521 yield _pack(_fstreamparamsize, len(param))
522 if param:
522 if param:
523 yield param
523 yield param
524
524
525 outdebug(self.ui, 'start of parts')
525 outdebug(self.ui, 'start of parts')
526 for part in self._parts:
526 for part in self._parts:
527 outdebug(self.ui, 'bundle part: "%s"' % part.type)
527 outdebug(self.ui, 'bundle part: "%s"' % part.type)
528 for chunk in part.getchunks(ui=self.ui):
528 for chunk in part.getchunks(ui=self.ui):
529 yield chunk
529 yield chunk
530 outdebug(self.ui, 'end of bundle')
530 outdebug(self.ui, 'end of bundle')
531 yield _pack(_fpartheadersize, 0)
531 yield _pack(_fpartheadersize, 0)
532
532
533 def _paramchunk(self):
533 def _paramchunk(self):
534 """return a encoded version of all stream parameters"""
534 """return a encoded version of all stream parameters"""
535 blocks = []
535 blocks = []
536 for par, value in self._params:
536 for par, value in self._params:
537 par = urllib.quote(par)
537 par = urllib.quote(par)
538 if value is not None:
538 if value is not None:
539 value = urllib.quote(value)
539 value = urllib.quote(value)
540 par = '%s=%s' % (par, value)
540 par = '%s=%s' % (par, value)
541 blocks.append(par)
541 blocks.append(par)
542 return ' '.join(blocks)
542 return ' '.join(blocks)
543
543
544 def salvageoutput(self):
544 def salvageoutput(self):
545 """return a list with a copy of all output parts in the bundle
545 """return a list with a copy of all output parts in the bundle
546
546
547 This is meant to be used during error handling to make sure we preserve
547 This is meant to be used during error handling to make sure we preserve
548 server output"""
548 server output"""
549 salvaged = []
549 salvaged = []
550 for part in self._parts:
550 for part in self._parts:
551 if part.type.startswith('output'):
551 if part.type.startswith('output'):
552 salvaged.append(part.copy())
552 salvaged.append(part.copy())
553 return salvaged
553 return salvaged
554
554
555
555
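# Illustrative usage sketch only (not part of the module): a bundler is
# typically driven roughly as follows, assuming a ``ui`` object is at hand
# and 'output' is an acceptable part type:
#
#   bundler = bundle20(ui)
#   bundler.addparam('somestreamparam')           # stream level parameter
#   part = bundler.newpart('output', data='...')  # add an applicative part
#   raw = ''.join(bundler.getchunks())            # the binary bundle2 blob
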
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError, e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, header=None):
    """return a valid unbundler object for a given header"""
    if header is None:
        header = changegroup.readexactly(fp, 4)
    magic, version = header[0:2], header[2:4]
    if magic != 'HG':
        raise util.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise util.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % header)
    return unbundler

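# Illustrative sketch of the dispatch above (hypothetical stream): the first
# four bytes of the stream select the unbundler class via ``formatmap``:
#
#   header = 'HG20'                      # magic 'HG', version '20'
#   unbundler = getunbundler(ui, fp, header=header)
#   for part in unbundler.iterparts():
#       ...                              # process each part in turn
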
class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and will cause this function to raise an error when unknown.

        Note: no options are currently supported. Any input will either be
        ignored or cause a failure.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be added here later to try to process the option
        # against a dict of known parameters.
        if name[0].islower():
            indebug(self.ui, "ignoring unknown parameter %r" % name)
        else:
            raise error.UnsupportedPartError(params=(name,))

    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

formatmap = {'20': unbundle20}

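# Illustrative note on the parameter blob parsed by ``unbundle20.params``
# above: it is a space separated list of url-quoted ``name[=value]`` entries,
# so for instance the (hypothetical) blob 'foo bar=baz%20qux' decodes to
# {'foo': None, 'bar': 'baz qux'}.
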
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except BaseException, exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise RuntimeError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, super(unbundlepart, self).tell()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            super(unbundlepart, self).seek(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
1011 if chunknum == len(self._chunkindex):
1012 self._chunkindex.append((pos,
1012 self._chunkindex.append((pos,
1013 super(unbundlepart, self).tell()))
1013 super(unbundlepart, self).tell()))
1014 yield result
1014 yield result
1015 payloadsize = self._unpack(_fpayloadsize)[0]
1015 payloadsize = self._unpack(_fpayloadsize)[0]
1016 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1016 indebug(self.ui, 'payload chunk size: %i' % payloadsize)
1017
1017
1018 def _findchunk(self, pos):
1018 def _findchunk(self, pos):
1019 '''for a given payload position, return a chunk number and offset'''
1019 '''for a given payload position, return a chunk number and offset'''
1020 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1020 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1021 if ppos == pos:
1021 if ppos == pos:
1022 return chunk, 0
1022 return chunk, 0
1023 elif ppos > pos:
1023 elif ppos > pos:
1024 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1024 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1025 raise ValueError('Unknown chunk')
1025 raise ValueError('Unknown chunk')
1026
1026
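The `_findchunk` lookup above maps a payload offset to a (chunk number, intra-chunk offset) pair by scanning the recorded `(payload start, file start)` tuples. A minimal standalone sketch of that mapping, with hypothetical names (not Mercurial's API), might look like:

```python
def findchunk(chunkindex, pos):
    """Return (chunknum, offset-into-chunk) for payload position pos.

    chunkindex is a list of (payload-start, file-start) tuples in
    increasing payload order, mirroring unbundlepart._chunkindex.
    """
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            # pos falls exactly on a chunk boundary
            return chunk, 0
        elif ppos > pos:
            # pos is inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')
```

Because the index's final entry records the end of the payload, any valid in-range position hits one of the two branches before the loop runs out.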
    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # turn the flat size list back into (key size, value size) pairs
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # the header is fully read, record it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

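The header layout consumed by `_readheader` can be sketched as a standalone parser. The struct formats below (`>B` for the type length, `>I` for the part id, `>BB` for the parameter counts and for each key/value size pair) are taken from the `_fparttypesize`, `_fpartid` and `_fpartparamcount` definitions earlier in the module, which are outside this excerpt, so treat them as assumptions:

```python
import io
import struct

def parseheader(blob):
    """Parse a bundle2 part header blob (a sketch of _readheader above).

    Layout: 1-byte type length, the type string, a 4-byte part id,
    one byte each for mandatory/advisory parameter counts, one
    (keysize, valuesize) byte pair per parameter, then the
    concatenated key and value strings.
    """
    fp = io.BytesIO(blob)
    typesize, = struct.unpack('>B', fp.read(1))
    parttype = fp.read(typesize).decode('ascii')
    partid, = struct.unpack('>I', fp.read(4))
    mancount, advcount = struct.unpack('>BB', fp.read(2))
    nparams = mancount + advcount
    sizes = struct.unpack('>' + 'BB' * nparams, fp.read(2 * nparams))
    pairs = list(zip(sizes[::2], sizes[1::2]))
    params = [(fp.read(k).decode('ascii'), fp.read(v).decode('ascii'))
              for k, v in pairs]
    return parttype, partid, params[:mancount], params[mancount:]
```

The mandatory/advisory split falls out of the counts alone: the first `mancount` parameters are mandatory, the rest advisory.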
    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise util.Abort(_('seek failed'))
            self._pos = newpos

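The whence resolution in `seek` mostly mirrors ordinary file seeking, with one quirk worth noting: the `whence == 2` branch computes `end - offset`, so a positive offset seeks backwards from the end of the payload. A standalone sketch of just that arithmetic (names are mine, not the class's API):

```python
import os

def resolveseek(pos, size, offset, whence=os.SEEK_SET):
    """Map (offset, whence) to an absolute payload position, mirroring
    unbundlepart.seek above. pos is the current position, size the
    total payload length."""
    if whence == os.SEEK_SET:
        newpos = offset
    elif whence == os.SEEK_CUR:
        newpos = pos + offset
    elif whence == os.SEEK_END:
        # note: end MINUS offset, unlike standard file objects
        newpos = size - offset
    else:
        raise ValueError('Unknown whence value: %r' % (whence,))
    if not 0 <= newpos <= size:
        raise ValueError('Offset out of range')
    return newpos
```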
# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(changegroup.packermap.keys()))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps
    dict"""
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

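The `obsmarkersversion` helper is small enough to restate in full: capability strings such as `'V0'` and `'V1'` are stripped of their leading `V` and parsed as integers, and anything not starting with `V` is ignored. The copy below is standalone so its behaviour can be checked directly:

```python
def obsmarkersversion(caps):
    """extract supported obsmarkers versions from a bundle2 caps dict
    (same logic as the function above)"""
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]
```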
@parthandler('changegroup', ('version',))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    unpacker = changegroup.packermap[unpackerversion][1]
    cg = unpacker(inpart, 'UN')
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

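All the `@parthandler(...)` decorators in this file register a function against a part type in a module-level mapping. A simplified sketch of that registry (the real decorator, defined earlier in the module, also records which parameters the handler understands so unknown mandatory parameters can be rejected):

```python
parthandlermapping = {}

def parthandler(parttype, params=()):
    """Decorator registering a function as a bundle2 part handler,
    in the style used throughout this module (simplified sketch)."""
    def _decorator(func):
        # part types are matched case-insensitively; the case of the
        # received type only carries the mandatory bit
        lparttype = parttype.lower()
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator
```

When an unbundled part arrives, the processor lowercases its type, looks it up in the mapping, and calls the handler with the bundle operation and the part.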
_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate that what was
      retrieved by the client matches the server's knowledge about the bundle.
    - digests: a space-separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate that what was retrieved
      by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise util.Abort(_('remote-changegroup does not support %s urls') %
                         parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
                         % 'size')
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise util.Abort(_('remote-changegroup: missing "%s" param') %
                             param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise util.Abort(_('%s: not a bundle version 1.0') %
                         util.hidepassword(raw_url))
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except util.Abort, e:
        raise util.Abort(_('bundle at %s is corrupted:\n%s') %
                         (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

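The `util.digestchecker` wrapper used above hashes bytes as they stream through and validates the total size and every advertised digest afterwards. Its implementation lives outside this excerpt, so the following is only a sketch of the idea under assumed names, not Mercurial's actual class:

```python
import hashlib
import io

class digestchecker(object):
    """Wrap a readable stream; hash bytes as they are read, then
    validate total size and digests on demand (a sketch of what the
    util.digestchecker call above relies on)."""

    def __init__(self, fh, size, digests):
        self._fh = fh
        self._size = size
        self._got = 0
        self._hashers = dict((k, hashlib.new(k)) for k in digests)
        self._expected = digests

    def read(self, length=-1):
        content = self._fh.read(length)
        self._got += len(content)
        for h in self._hashers.values():
            h.update(content)
        return content

    def validate(self):
        if self._got != self._size:
            raise ValueError('size mismatch: expected %d, got %d'
                             % (self._size, self._got))
        for k, h in self._hashers.items():
            if h.hexdigest() != self._expected[k]:
                raise ValueError('%s mismatch' % k)
```

Note that validation happens only after `addchangegroup` has consumed the stream, which is why the handler calls `real_part.validate()` last and converts a failure into a "bundle is corrupted" abort.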
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

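The `check:heads` payload is nothing but a flat run of 20-byte binary nodes, terminated by end of stream. The reading loop can be isolated into a small testable helper (my name, not the module's):

```python
import io

def readheads(fp):
    """Read a check:heads payload: consecutive 20-byte nodes until
    end of stream (mirrors the loop in handlecheckheads above)."""
    heads = []
    h = fp.read(20)
    while len(h) == 20:
        heads.append(h)
        h = fp.read(20)
    # any trailing bytes would mean a truncated node
    assert not h, 'trailing bytes are not a full 20-byte node'
    return heads
```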
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(('remote: %s\n' % line))

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.UnsupportedPartError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

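The `pushkey.decodekeys` call above turns a listkeys payload back into a dict. Its definition is outside this excerpt; to my understanding the wire format is one tab-separated key/value pair per line, which the following hedged sketch assumes:

```python
def decodekeys(data):
    """Decode a listkeys payload into a dict, assuming one
    tab-separated key/value pair per line (a sketch of what
    pushkey.decodekeys does; not copied from Mercurial)."""
    return dict(line.split('\t', 1) for line in data.splitlines())
```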
@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        raise util.Abort(_('failed to update value for "%s/%s"')
                         % (namespace, key))

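The final `if inpart.mandatory and not ret:` guard is the point of this changeset: when a *mandatory* pushkey part fails (`ret` is falsy), the whole unbundle now aborts instead of merely recording the failure, while advisory parts still leave reporting to the `reply:pushkey` part. A condensed sketch of that decision, with a stand-in exception class:

```python
class Abort(Exception):
    """stand-in for util.Abort in this sketch"""

def checkpushkeyresult(mandatory, ret, namespace, key):
    """Abort when a mandatory pushkey part failed; advisory failures
    are left for the 'reply:pushkey' part to report (mirrors the
    guard added to handlepushkey above)."""
    if mandatory and not ret:
        raise Abort('failed to update value for "%s/%s"'
                    % (namespace, key))
    return ret
```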
@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('received incomplete .hgtags fnodes data, ignoring\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
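The `hgtagsfnodes` payload is a bare concatenation of (changeset node, .hgtags filenode) pairs, each 20 bytes; a short read terminates the stream. The pair-reading loop in isolation (helper name is mine):

```python
import io

def readfnodepairs(fp):
    """Yield (changeset node, .hgtags filenode) pairs from an
    hgtagsfnodes payload; stop on the first short read (mirrors the
    loop in handlehgtagsfnodes above)."""
    while True:
        node = fp.read(20)
        fnode = fp.read(20)
        if len(node) < 20 or len(fnode) < 20:
            # incomplete trailing data: the handler just ignores it
            return
        yield node, fnode
```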
@@ -1,719 +1,861 b''
Test exchange of common information using bundle2


  $ getmainid() {
  > hg -R main log --template '{node}\n' --rev "$1"
  > }

enable obsolescence

  $ cat > $TESTTMP/bundle2-pushkey-hook.sh << EOF
  > echo pushkey: lock state after \"\$HG_NAMESPACE\"
  > hg debuglock
  > EOF

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > evolution=createmarkers,exchange
  > bundle2-exp=True
  > bundle2-output-capture=True
  > [ui]
  > ssh=dummyssh
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > [web]
  > push_ssl = false
  > allow_push = *
  > [phases]
  > publish=False
  > [hooks]
  > pretxnclose.tip = hg log -r tip -T "pre-close-tip:{node|short} {phase} {bookmarks}\n"
  > txnclose.tip = hg log -r tip -T "postclose-tip:{node|short} {phase} {bookmarks}\n"
  > txnclose.env = sh -c "HG_LOCAL= printenv.py txnclose"
  > pushkey= sh "$TESTTMP/bundle2-pushkey-hook.sh"
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'
  pre-close-tip:3903775176ed draft
  postclose-tip:3903775176ed draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=commit (glob)

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=unbundle HG_TXNID=TXN:* HG_TXNNAME=unbundle (glob)
  bundle:*/tests/bundles/rebase.hg HG_URL=bundle:*/tests/bundles/rebase.hg (glob)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ cd ..

Real world exchange
=====================

Add more obsolescence information

  $ hg -R main debugobsolete -d '0 0' 1111111111111111111111111111111111111111 `getmainid 9520eea781bc`
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main debugobsolete -d '0 0' 2222222222222222222222222222222222222222 `getmainid 24b6387c8c8c`
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)

clone --pull

  $ hg -R main phase --public cd010b8cd998
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  1 new obsolescence markers
  pre-close-tip:9520eea781bc draft
  postclose-tip:9520eea781bc draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=cd010b8cd998f3981a5a8115f94f8da4ab506089 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

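Each `hg debugobsolete` line above prints one obsolescence marker: a precursor hash, one or more successor hashes, a flags field, the marker date in parentheses, and a metadata dict. A rough parser for the format as it appears in this transcript (illustrative only — this textual layout is not a stable interface, and real code should go through Mercurial's obsolete module instead):

```python
import re

# Rough shape of a `hg debugobsolete` output line as seen in this test:
#   <precursor> <successor...> <flags> (<date>) {'user': ...}
MARKER_RE = re.compile(
    r"^(?P<precursor>[0-9a-f]{40})\s+"
    r"(?P<successors>(?:[0-9a-f]{40}\s+)*)"
    r"(?P<flags>\d+)\s+"
    r"\((?P<date>[^)]*)\)\s+"
    r"(?P<meta>\{.*\})$"
)


def parse_marker(line):
    """Split one debugobsolete output line into its fields (illustrative)."""
    m = MARKER_RE.match(line.strip())
    if m is None:
        raise ValueError('unrecognized marker line: %r' % line)
    return {
        'precursor': m.group('precursor'),
        'successors': m.group('successors').split(),
        'flags': int(m.group('flags')),
        'date': m.group('date'),
        'metadata': m.group('meta'),
    }


# sample taken verbatim from the transcript above
sample = ("1111111111111111111111111111111111111111 "
          "9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 "
          "(Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}")
marker = parse_marker(sample)
```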
pull

  $ hg -R main phase --public 9520eea781bc
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers
  pre-close-tip:24b6387c8c8c draft
  postclose-tip:24b6387c8c8c draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_NODE=24b6387c8c8cae37178880f3fa95ded3cb1cf785 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R other log -G
  o 2:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
  |
  | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty (with phase movement)

  $ hg -R main phase --public 24b6387c8c8c
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  pre-close-tip:24b6387c8c8c public
  postclose-tip:24b6387c8c8c public
  txnclose hook: HG_NEW_OBSMARKERS=0 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  $ hg -R other log -G
  o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  |
  | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

pull empty

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  no changes found
  pre-close-tip:24b6387c8c8c public
  postclose-tip:24b6387c8c8c public
  txnclose hook: HG_NEW_OBSMARKERS=0 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
  file:/*/$TESTTMP/main HG_URL=file:$TESTTMP/main (glob)
  $ hg -R other log -G
  o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  |
  | @ 1:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

add extra data to test their exchange during push

  $ hg -R main bookmark --rev eea13746799a book_eea1
  $ hg -R main debugobsolete -d '0 0' 3333333333333333333333333333333333333333 `getmainid eea13746799a`
  pre-close-tip:02de42196ebe draft
  postclose-tip:02de42196ebe draft
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main bookmark --rev 02de42196ebe book_02de
  $ hg -R main debugobsolete -d '0 0' 4444444444444444444444444444444444444444 `getmainid 02de42196ebe`
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main bookmark --rev 42ccdea3bb16 book_42cc
  $ hg -R main debugobsolete -d '0 0' 5555555555555555555555555555555555555555 `getmainid 42ccdea3bb16`
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main bookmark --rev 5fddd98957c8 book_5fdd
  $ hg -R main debugobsolete -d '0 0' 6666666666666666666666666666666666666666 `getmainid 5fddd98957c8`
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)
  $ hg -R main bookmark --rev 32af7686d403 book_32af
  $ hg -R main debugobsolete -d '0 0' 7777777777777777777777777777777777777777 `getmainid 32af7686d403`
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_NEW_OBSMARKERS=1 HG_TXNID=TXN:* HG_TXNNAME=debugobsolete (glob)

  $ hg -R other bookmark --rev cd010b8cd998 book_eea1
  $ hg -R other bookmark --rev cd010b8cd998 book_02de
  $ hg -R other bookmark --rev cd010b8cd998 book_42cc
  $ hg -R other bookmark --rev cd010b8cd998 book_5fdd
  $ hg -R other bookmark --rev cd010b8cd998 book_32af

  $ hg -R main phase --public eea13746799a
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)

push
  $ hg -R main push other --rev eea13746799a --bookmark book_eea1
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files (-1 heads)
  remote: 1 new obsolescence markers
  remote: pre-close-tip:eea13746799a public book_eea1
  remote: pushkey: lock state after "phases"
  remote: lock: free
  remote: wlock: free
  remote: pushkey: lock state after "bookmarks"
  remote: lock: free
  remote: wlock: free
  remote: postclose-tip:eea13746799a public book_eea1
  remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=eea13746799a9e0bfd88f29d3c2e9dc9389f524f HG_PHASES_MOVED=1 HG_SOURCE=push HG_TXNID=TXN:* HG_TXNNAME=push HG_URL=push (glob)
  updating bookmark book_eea1
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
  file:/*/$TESTTMP/other HG_URL=file:$TESTTMP/other (glob)
  $ hg -R other log -G
  o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
  |\
  | o 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  | |
  @ | 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de book_32af book_42cc book_5fdd A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

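The `txnclose.env` hook configured at the top of this test shells out to `printenv.py`, a helper from Mercurial's test suite that dumps the `HG_*` variables a transaction exposes (with `HG_LOCAL=` blanked out on the command line). A minimal sketch of that filtering, assuming the helper's behavior is roughly "print the non-empty `HG_*` variables sorted by name" — the real script does more (globbing, hook-name prefix):

```python
import os


def hook_environment(environ=None):
    """Collect the HG_* variables a Mercurial hook would see, sorted by name.

    Approximation of the printenv.py helper used by the txnclose.env hook
    above; names and behavior here are assumptions for illustration.
    """
    environ = os.environ if environ is None else environ
    return sorted(
        (name, value)
        for name, value in environ.items()
        if name.startswith('HG_') and value  # HG_LOCAL= style empties dropped
    )


# hypothetical snapshot of variables like those in the txnclose output above
fake_env = {
    'HG_TXNNAME': 'pull',
    'HG_SOURCE': 'pull',
    'HG_NEW_OBSMARKERS': '1',
    'HG_LOCAL': '',          # blanked out by the hook command line
    'PATH': '/usr/bin',      # non-HG_ variables are ignored
}
hookvars = hook_environment(fake_env)
```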
249 pull over ssh
249 pull over ssh
250
250
251 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
251 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --bookmark book_02de
252 pulling from ssh://user@dummy/main
252 pulling from ssh://user@dummy/main
253 searching for changes
253 searching for changes
254 adding changesets
254 adding changesets
255 adding manifests
255 adding manifests
256 adding file changes
256 adding file changes
257 added 1 changesets with 1 changes to 1 files (+1 heads)
257 added 1 changesets with 1 changes to 1 files (+1 heads)
258 1 new obsolescence markers
258 1 new obsolescence markers
259 updating bookmark book_02de
259 updating bookmark book_02de
260 pre-close-tip:02de42196ebe draft book_02de
260 pre-close-tip:02de42196ebe draft book_02de
261 postclose-tip:02de42196ebe draft book_02de
261 postclose-tip:02de42196ebe draft book_02de
262 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
262 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=02de42196ebee42ef284b6780a87cdc96e8eaab6 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
263 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
263 ssh://user@dummy/main HG_URL=ssh://user@dummy/main
264 (run 'hg heads' to see heads, 'hg merge' to merge)
264 (run 'hg heads' to see heads, 'hg merge' to merge)
265 $ hg -R other debugobsolete
265 $ hg -R other debugobsolete
266 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
266 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
267 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
268 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
269 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
269 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
270
270
271 pull over http
271 pull over http
272
272
273 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
273 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
274 $ cat main.pid >> $DAEMON_PIDS
274 $ cat main.pid >> $DAEMON_PIDS
275
275
276 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
276 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16 --bookmark book_42cc
277 pulling from http://localhost:$HGPORT/
277 pulling from http://localhost:$HGPORT/
278 searching for changes
278 searching for changes
279 adding changesets
279 adding changesets
280 adding manifests
280 adding manifests
281 adding file changes
281 adding file changes
282 added 1 changesets with 1 changes to 1 files (+1 heads)
282 added 1 changesets with 1 changes to 1 files (+1 heads)
283 1 new obsolescence markers
283 1 new obsolescence markers
284 updating bookmark book_42cc
284 updating bookmark book_42cc
285 pre-close-tip:42ccdea3bb16 draft book_42cc
285 pre-close-tip:42ccdea3bb16 draft book_42cc
286 postclose-tip:42ccdea3bb16 draft book_42cc
286 postclose-tip:42ccdea3bb16 draft book_42cc
287 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
287 txnclose hook: HG_BOOKMARK_MOVED=1 HG_NEW_OBSMARKERS=1 HG_NODE=42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:* HG_TXNNAME=pull (glob)
288 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
288 http://localhost:$HGPORT/ HG_URL=http://localhost:$HGPORT/
289 (run 'hg heads .' to see heads, 'hg merge' to merge)
289 (run 'hg heads .' to see heads, 'hg merge' to merge)
290 $ cat main-error.log
290 $ cat main-error.log
291 $ hg -R other debugobsolete
291 $ hg -R other debugobsolete
292 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
292 1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
293 2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
294 3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
295 4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
296 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
296 5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
297
297
298 push over ssh
298 push over ssh
299
299
300 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
300 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8 --bookmark book_5fdd
301 pushing to ssh://user@dummy/other
301 pushing to ssh://user@dummy/other
302 searching for changes
302 searching for changes
303 remote: adding changesets
303 remote: adding changesets
304 remote: adding manifests
304 remote: adding manifests
305 remote: adding file changes
305 remote: adding file changes
306 remote: added 1 changesets with 1 changes to 1 files
306 remote: added 1 changesets with 1 changes to 1 files
307 remote: 1 new obsolescence markers
307 remote: 1 new obsolescence markers
308 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
308 remote: pre-close-tip:5fddd98957c8 draft book_5fdd
309 remote: pushkey: lock state after "bookmarks"
309 remote: pushkey: lock state after "bookmarks"
310 remote: lock: free
310 remote: lock: free
311 remote: wlock: free
311 remote: wlock: free
312 remote: postclose-tip:5fddd98957c8 draft book_5fdd
312 remote: postclose-tip:5fddd98957c8 draft book_5fdd
313 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
313 remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=5fddd98957c8a54a4d436dfe1da9d87f21a1b97b HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:ssh:127.0.0.1 (glob)
314 updating bookmark book_5fdd
314 updating bookmark book_5fdd
315 pre-close-tip:02de42196ebe draft book_02de
315 pre-close-tip:02de42196ebe draft book_02de
316 postclose-tip:02de42196ebe draft book_02de
316 postclose-tip:02de42196ebe draft book_02de
317 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
317 txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
318 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
318 ssh://user@dummy/other HG_URL=ssh://user@dummy/other
319 $ hg -R other log -G
319 $ hg -R other log -G
320 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
320 o 6:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
321 |
321 |
322 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
322 o 5:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
323 |
323 |
324 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
324 | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
325 | |
325 | |
  | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
  | |/|
  | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  |/ /
  | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

push over http

  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main phase --public 32af7686d403
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_PHASES_MOVED=1 HG_TXNID=TXN:* HG_TXNNAME=phase (glob)
  $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403 --bookmark book_32af
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: 1 new obsolescence markers
  remote: pre-close-tip:32af7686d403 public book_32af
  remote: pushkey: lock state after "phases"
  remote: lock: free
  remote: wlock: free
  remote: pushkey: lock state after "bookmarks"
  remote: lock: free
  remote: wlock: free
  remote: postclose-tip:32af7686d403 public book_32af
  remote: txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_NEW_OBSMARKERS=1 HG_NODE=32af7686d403cf45b5d95f2d70cebea587ac806a HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:* HG_TXNNAME=serve HG_URL=remote:http:127.0.0.1: (glob)
  updating bookmark book_32af
  pre-close-tip:02de42196ebe draft book_02de
  postclose-tip:02de42196ebe draft book_02de
  txnclose hook: HG_SOURCE=push-response HG_TXNID=TXN:* HG_TXNNAME=push-response (glob)
  http://localhost:$HGPORT2/ HG_URL=http://localhost:$HGPORT2/
  $ cat other-error.log

Check final content.

  $ hg -R other log -G
  o 7:32af7686d403 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_32af D
  |
  o 6:5fddd98957c8 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_5fdd C
  |
  o 5:42ccdea3bb16 public Nicolas Dumazet <nicdumz.commits@gmail.com> book_42cc B
  |
  | o 4:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> book_02de H
  | |
  | | o 3:eea13746799a public Nicolas Dumazet <nicdumz.commits@gmail.com> book_eea1 G
  | |/|
  | o | 2:24b6387c8c8c public Nicolas Dumazet <nicdumz.commits@gmail.com> F
  |/ /
  | @ 1:9520eea781bc public Nicolas Dumazet <nicdumz.commits@gmail.com> E
  |/
  o 0:cd010b8cd998 public Nicolas Dumazet <nicdumz.commits@gmail.com> A

  $ hg -R other debugobsolete
  1111111111111111111111111111111111111111 9520eea781bcca16c1e15acc0ba14335a0e8e5ba 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  2222222222222222222222222222222222222222 24b6387c8c8cae37178880f3fa95ded3cb1cf785 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  3333333333333333333333333333333333333333 eea13746799a9e0bfd88f29d3c2e9dc9389f524f 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  4444444444444444444444444444444444444444 02de42196ebee42ef284b6780a87cdc96e8eaab6 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  5555555555555555555555555555555555555555 42ccdea3bb16d28e1848c95fe2e44c000f3f21b1 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  6666666666666666666666666666666666666666 5fddd98957c8a54a4d436dfe1da9d87f21a1b97b 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  7777777777777777777777777777777777777777 32af7686d403cf45b5d95f2d70cebea587ac806a 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}

(check that no 'pending' files remain)

  $ ls -1 other/.hg/bookmarks*
  other/.hg/bookmarks
  $ ls -1 other/.hg/store/phaseroots*
  other/.hg/store/phaseroots
  $ ls -1 other/.hg/store/00changelog.i*
  other/.hg/store/00changelog.i

Error Handling
==============

Check that errors are properly returned to the client during push.

Setting up

  $ cat > failpush.py << EOF
  > """A small extension that makes push fail when using bundle2
  >
  > used to test error handling in bundle2
  > """
  >
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import exchange
  > from mercurial import extensions
  >
  > def _pushbundle2failpart(pushop, bundler):
  >     reason = pushop.ui.config('failpush', 'reason', None)
  >     part = None
  >     if reason == 'abort':
  >         bundler.newpart('test:abort')
  >     if reason == 'unknown':
  >         bundler.newpart('test:unknown')
  >     if reason == 'race':
  >         # 20 bytes of dummy data that will not match the real heads
  >         bundler.newpart('check:heads', data='01234567890123456789')
  >
  > @bundle2.parthandler("test:abort")
  > def handleabort(op, part):
  >     raise util.Abort('Abandon ship!', hint="don't panic")
  >
  > def uisetup(ui):
  >     exchange.b2partsgenmapping['failpart'] = _pushbundle2failpart
  >     exchange.b2partsgenorder.insert(0, 'failpart')
  >
  > EOF

  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  pre-close-tip:e7ec4e813ba6 draft
  postclose-tip:e7ec4e813ba6 draft
  txnclose hook: HG_TXNID=TXN:* HG_TXNNAME=commit (glob)
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

  $ killdaemons.py
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]


Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for test:unknown
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for test:unknown
  [255]

Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

Doing the actual push: hook abort

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason =
  > [hooks]
  > pretxnclose.failpush = sh -c "echo 'You shall not pass!'; false"
  > txnabort.failpush = sh -c "echo 'Cleaning up the mess...'"
  > EOF

  $ killdaemons.py
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: pre-close-tip:e7ec4e813ba6 draft
  remote: You shall not pass!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnclose.failpush hook exited with status 1
  [255]

(check that no 'pending' files remain)

  $ ls -1 other/.hg/bookmarks*
  other/.hg/bookmarks
  $ ls -1 other/.hg/store/phaseroots*
  other/.hg/store/phaseroots
  $ ls -1 other/.hg/store/00changelog.i*
  other/.hg/store/00changelog.i

Check error from hook during the unbundling process itself

  $ cat << EOF >> $HGRCPATH
  > pretxnchangegroup = sh -c "echo 'Fail early!'; false"
  > EOF
  $ killdaemons.py # reload http config
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]

Check output capture control.

(should still be forced for http, disabled for local and ssh)

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > bundle2-output-capture=False
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  Fail early!
  transaction abort!
  Cleaning up the mess...
  rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]
  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: Fail early!
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: pretxnchangegroup hook exited with status 1
  [255]

Check abort from mandatory pushkey

  $ cat > mandatorypart.py << EOF
  > from mercurial import exchange
  > from mercurial import pushkey
  > from mercurial import node
  > @exchange.b2partsgenerator('failingpushkey')
  > def addfailingpushkey(pushop, bundler):
  >     enc = pushkey.encode
  >     part = bundler.newpart('pushkey')
  >     part.addparam('namespace', enc('phases'))
  >     part.addparam('key', enc(pushop.repo['cd010b8cd998'].hex()))
  >     part.addparam('old', enc(str(0))) # successful update
  >     part.addparam('new', enc(str(0)))
  > EOF
  $ cat >> $HGRCPATH << EOF
  > [hooks]
  > pretxnchangegroup=
  > pretxnclose.failpush=
  > prepushkey.failpush = sh -c "echo 'do not push the key !'; false"
  > [extensions]
  > mandatorypart=$TESTTMP/mandatorypart.py
  > EOF
  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

(Failure from a hook)

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  do not push the key !
  pushkey-abort: prepushkey.failpush hook exited with status 1
  transaction abort!
  Cleaning up the mess...
  rollback completed
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]
  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: do not push the key !
  remote: pushkey-abort: prepushkey.failpush hook exited with status 1
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]
  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: do not push the key !
  remote: pushkey-abort: prepushkey.failpush hook exited with status 1
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]

(Failure from the pushkey)

  $ cat > mandatorypart.py << EOF
  > from mercurial import exchange
  > from mercurial import pushkey
  > from mercurial import node
  > @exchange.b2partsgenerator('failingpushkey')
  > def addfailingpushkey(pushop, bundler):
  >     enc = pushkey.encode
  >     part = bundler.newpart('pushkey')
  >     part.addparam('namespace', enc('phases'))
  >     part.addparam('key', enc(pushop.repo['cd010b8cd998'].hex()))
  >     part.addparam('old', enc(str(4))) # will fail
  >     part.addparam('new', enc(str(3)))
  > EOF
  $ cat >> $HGRCPATH << EOF
  > [hooks]
  > prepushkey.failpush =
  > EOF
  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS # reload http config
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  transaction abort!
  Cleaning up the mess...
  rollback completed
  pushkey: lock state after "phases"
  lock: free
  wlock: free
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]
  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  remote: pushkey: lock state after "phases"
  remote: lock: free
  remote: wlock: free
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]
  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  remote: transaction abort!
  remote: Cleaning up the mess...
  remote: rollback completed
  remote: pushkey: lock state after "phases"
  remote: lock: free
  remote: wlock: free
  abort: failed to update value for "phases/cd010b8cd998f3981a5a8115f94f8da4ab506089"
  [255]