unbundle20: retrieve unbundler instances through a factory function...
Pierre-Yves David
r24641:60fecc5b default
@@ -1,1227 +1,1231 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any applicative level options MUST go into a bundle2 part instead.
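
  For illustration, a minimal sketch of decoding such a parameter blob (a
  hypothetical helper, not part of this module)::

    import urllib

    def decodestreamparams(blob):
        # split the space separated list, then unquote names and values
        params = {}
        for item in blob.split(' '):
            if '=' in item:
                name, value = [urllib.unquote(i) for i in item.split('=', 1)]
            else:
                name, value = urllib.unquote(item), None
            params[name] = value
        return params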

Payload part
------------------------

Binary format is as follows

:header size: int32

  The total number of bytes used by the part headers. When the header is
  empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    to interpret the part payload.

    The binary format of the header is as follows

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
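
As an illustration, a sketch of consuming one part payload (a hypothetical
helper; assumes a file-like ``fp`` positioned at the start of the payload)::

    import struct

    def readpayload(fp):
        # read <chunksize><chunkdata> couples until the zero size chunk
        chunks = []
        chunksize = struct.unpack('>i', fp.read(4))[0]
        while chunksize > 0:
            chunks.append(fp.read(chunksize))
            chunksize = struct.unpack('>i', fp.read(4))[0]
        # a negative size flags special case processing, not handled here
        return ''.join(chunks)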

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

import errno
import sys
import util
import struct
import urllib
import string
import obsolete
import pushkey
import url
import re

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

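# For example (a sketch; the byte values are hypothetical): two parameters
# produce the format '>BBBB', one (key size, value size) couple per parameter:
#
#   _makefpartparamsizes(2)                                # -> '>BBBB'
#   _unpack(_makefpartparamsizes(2), '\x03\x01\x04\x02')   # -> (3, 1, 4, 2)
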
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

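# Example usage (a sketch; the category names are hypothetical):
#
#   records = unbundlerecords()
#   records.add('changegroup', {'return': 1})
#   records.add('pushkey', {'namespace': 'phases'})
#   records['changegroup']   # -> ({'return': 1},)
#   list(records)            # -> [('changegroup', ...), ('pushkey', ...)]
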
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object currently has very little content; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    if transactiongetter is None:
        transactiongetter = _notransaction
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            _processpart(op, part)
    except Exception, exc:
        for part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from exceptions raised when processing the old format.
        # This is mostly needed to handle different return codes for unbundle
        # according to the type of bundle. We should probably clean up or drop
        # this return code craziness in a future version.
        exc.duringunbundle2 = True
        raise
    return op

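# A minimal calling sketch (assumes a ``repo`` object and a bundle2 binary
# stream ``fp``; uses the ``getunbundler`` factory defined below):
#
#   unbundler = getunbundler(repo.ui, fp)
#   op = processbundle(repo, unbundler)
#   op.records   # inspect what the part handlers recorded
#
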
def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                raise error.UnsupportedPartError(parttype=part.type)
            op.ui.debug('found a handler for part %r\n' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                raise error.UnsupportedPartError(parttype=part.type,
                                                 params=unknownparams)
        except error.UnsupportedPartError, exc:
            if part.mandatory: # mandatory parts
                raise
            op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
            return # skip to part processing

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.reply is not None:
            op.ui.pushbuffer(error=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('b2x:output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


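# A handler sketch (the part type and its behavior are hypothetical):
#
#   @parthandler('b2x:example', ('key',))
#   def handleexample(op, part):
#       # record the value of the mandatory 'key' parameter
#       op.records.add('example', {'key': part.params['key']})
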
def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

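# Round-trip sketch (the capability names are illustrative):
#
#   blob = encodecaps({'HG2Y': (), 'b2x:listkeys': ('namespace',)})
#   # -> 'HG2Y\nb2x%3Alistkeys=namespace'
#   decodecaps(blob)['b2x:listkeys']   # -> ['namespace']
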
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    _magicstring = 'HG2Y'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

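# Building sketch (assumes a ``ui`` object and a writable file ``fp``; the
# part type and data are illustrative):
#
#   bundler = bundle20(ui)
#   bundler.newpart('b2x:output', data='hello', mandatory=False)
#   for chunk in bundler.getchunks():
#       fp.write(chunk)
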
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError, e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, header=None):
    """return a valid unbundler object for a given header"""
    return unbundle20(ui, fp, header)

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
        magic, version = header[0:2], header[2:4]
        if magic != 'HG':
            raise util.Abort(_('not a Mercurial bundle'))
        if version != '2Y':
            raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or cause a failure.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise error.UnsupportedPartError(params=(name,))


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        return False

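# Reading sketch (assumes ``ui`` and a bundle2 stream ``fp`` starting with
# the 'HG2Y' magic string):
#
#   unbundler = getunbundler(ui, fp)
#   unbundler.params                  # dict of stream level parameters
#   for part in unbundler.iterparts():
#       ui.write('%s\n' % part.type)
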
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except Exception, exc:
            # backup exception data for later
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('b2x:error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks():
                yield chunk
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows the transmission of exceptions raised on the producer side
    during part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug('bundle2 stream interruption, looking for a part.\n')
        headerblock = self._readpartheader()
        if headerblock is None:
            self.ui.debug('no part found during interruption.\n')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None

    @property
    def repo(self):
        raise RuntimeError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

812 class unbundlepart(unpackermixin):
816 class unbundlepart(unpackermixin):
813 """a bundle part read from a bundle"""
817 """a bundle part read from a bundle"""
814
818
815 def __init__(self, ui, header, fp):
819 def __init__(self, ui, header, fp):
816 super(unbundlepart, self).__init__(fp)
820 super(unbundlepart, self).__init__(fp)
817 self.ui = ui
821 self.ui = ui
818 # unbundle state attr
822 # unbundle state attr
819 self._headerdata = header
823 self._headerdata = header
820 self._headeroffset = 0
824 self._headeroffset = 0
821 self._initialized = False
825 self._initialized = False
822 self.consumed = False
826 self.consumed = False
823 # part data
827 # part data
824 self.id = None
828 self.id = None
825 self.type = None
829 self.type = None
826 self.mandatoryparams = None
830 self.mandatoryparams = None
827 self.advisoryparams = None
831 self.advisoryparams = None
828 self.params = None
832 self.params = None
829 self.mandatorykeys = ()
833 self.mandatorykeys = ()
830 self._payloadstream = None
834 self._payloadstream = None
831 self._readheader()
835 self._readheader()
        self._mandatory = None
        self._chunkindex = [] # (payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read-only to prevent people from touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user-friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to the specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, super(unbundlepart, self).tell()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            super(unbundlepart, self).seek(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        self.ui.debug('payload chunk size: %i\n' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos,
                                             super(unbundlepart, self).tell()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

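    # Worked example (illustrative only, not from the original source; the
    # file offsets are made up): with a chunk index of
    # [(0, 100), (4096, 4208), (8192, 8316)], mapping payload positions to
    # file positions, _findchunk(4096) returns (1, 0) and _findchunk(5000)
    # returns (1, 904), i.e. 904 bytes into the chunk whose payload starts
    # at offset 4096.
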
    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

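    # Illustrative header layout (a hedged sketch inferred from the reads
    # above; exact field widths follow the _fparttypesize, _fpartid and
    # _fpartparamcount struct formats). A part of type "b2x:output" with one
    # advisory parameter ('verbosity', 'debug') would be laid out roughly as:
    #
    #   <typesize=10><"b2x:output"><part id>
    #   <mancount=0><advcount=1>
    #   <keysize=9><valuesize=5><"verbosity"><"debug">
    #
    # followed by the payload chunks consumed by _payloadchunks().
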
    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        self._pos += len(data)
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise util.Abort(_('Seek failed\n'))
            self._pos = newpos

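    # Minimal usage sketch (an assumption for illustration, not from the
    # original source): rewinding a part to re-read its payload. A first
    # full read() populates the chunk index, after which seek() can jump
    # back to any payload offset.
    #
    #   part.read()           # consume fully; chunk index is now complete
    #   part.seek(0)          # jump back to payload offset 0
    #   head = part.read(16)  # re-read the first 16 bytes
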
capabilities = {'HG2Y': (),
                'b2x:listkeys': (),
                'b2x:pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'b2x:remote-changegroup': ('http', 'https'),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['b2x:changegroup'] = tuple(sorted(changegroup.packermap.keys()))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['b2x:obsmarkers'] = supportedformat
    if allowpushback:
        caps['b2x:pushback'] = ()
    return caps

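# Hypothetical extension sketch (not part of this module): advertising an
# extra capability by wrapping getrepocaps from an extension. The capability
# name and wrapper are illustrative assumptions.
#
#   from mercurial import extensions, bundle2
#
#   def _getrepocaps(orig, repo, *args, **kwargs):
#       caps = orig(repo, *args, **kwargs)
#       caps['b2x:myfeature'] = ('1',)
#       return caps
#
#   def uisetup(ui):
#       extensions.wrapfunction(bundle2, 'getrepocaps', _getrepocaps)
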
def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable('bundle2-exp')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2-exp'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('b2x:obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

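# Illustrative example (an assumption, not from the original source): with
# caps = {'b2x:obsmarkers': ('V0', 'V1')}, obsmarkersversion(caps) yields
# [0, 1], from which a sender can pick a common obsmarker format version.
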
@parthandler('b2x:changegroup', ('version',))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    unpacker = changegroup.packermap[unpackerversion][1]
    cg = unpacker(inpart, 'UN')
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('b2x:reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('b2x:remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server's knowledge about the
        bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest
        with that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the
        bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['b2x:remote-changegroup']:
        raise util.Abort(_('remote-changegroup does not support %s urls') %
                         parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise util.Abort(_('remote-changegroup: invalid value for param "%s"')
                         % 'size')
    except KeyError:
        raise util.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise util.Abort(_('remote-changegroup: missing "%s" param') %
                             param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise util.Abort(_('%s: not a bundle version 1.0') %
                         util.hidepassword(raw_url))
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('b2x:reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except util.Abort, e:
        raise util.Abort(_('bundle at %s is corrupted:\n%s') %
                         (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

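# Sender-side sketch (hypothetical, for illustration; the url, size and
# digest values are made up): a server pointing clients at a pre-built
# bundle10 through a b2x:remote-changegroup part.
#
#   part = bundler.newpart('b2x:remote-changegroup')
#   part.addparam('url', 'https://example.com/pull.hg')
#   part.addparam('size', '123456')
#   part.addparam('digests', 'sha1')
#   part.addparam('digest:sha1',
#                 '3f786850e387550fdab836ed7e6dc881de23001b')
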
@parthandler('b2x:reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('b2x:check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('b2x:output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.write(('remote: %s\n' % line))

@parthandler('b2x:replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('b2x:error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.UnsupportedPartError(**kwargs)

@parthandler('b2x:error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('b2x:listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('b2x:reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)

@parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('b2x:obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    new = op.repo.obsstore.mergemarkers(tr, inpart.read())
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('b2x:reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)
@@ -1,1303 +1,1303 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version == '2Y':
-        return bundle2.unbundle20(ui, fh, header=magic + version)
+        return bundle2.getunbundler(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))

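# Minimal usage sketch (an illustrative assumption, not part of this change):
#
#   fh = open('bundle.hg', 'rb')
#   cg = readbundle(ui, fh, 'bundle.hg')
#   # yields a changegroup.cg1unpacker for bundle10 files, or a bundle2
#   # unbundler (obtained through bundle2.getunbundler as of this revision)
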
def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if markers:
        remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
        version = obsolete.commonversion(remoteversions)
        if version is None:
            raise ValueError('bundler does not support common obsmarker format')
        stream = obsolete.encodemarkers(markers, True, version=version)
        return bundler.newpart('b2x:obsmarkers', data=stream)
    return None

class pushoperation(object):
    """An object that represents a single push operation

    Its purpose is to carry push-related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phases changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phases changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(self.outgoing.common)
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

# mapping of messages used when pushing bookmarks
bookmsgmap = {'update': (_("updating bookmark %s\n"),
                         _('updating bookmark %s failed!\n')),
              'export': (_("exporting bookmark %s\n"),
                         _('exporting bookmark %s failed!\n')),
              'delete': (_("deleting remote bookmark %s\n"),
                         _('deleting remote bookmark %s failed!\n')),
              }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    locallock = None
    try:
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
                                          False)
                and pushop.remote.capable('bundle2-exp')):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
        if pushop.trmanager:
            pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()

    return pushop

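# Minimal usage sketch (illustrative assumption; the peer URL is made up):
#
#   from mercurial import hg, exchange
#   remote = hg.peer(repo.ui, {}, 'https://example.com/repo')
#   pushop = exchange.push(repo, remote)
#   if pushop.cgresult == 1:
#       repo.ui.status('pushed, remote head count unchanged\n')
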
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step, if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

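# Hypothetical extension sketch: registering an extra discovery step through
# the decorator above. The step name and body are illustrative assumptions,
# not part of Mercurial.
#
#   @pushdiscovery('mystep')
#   def _pushdiscoverymystep(pushop):
#       pushop.ui.debug('running custom discovery\n')
#
# Wrapping an existing step would instead edit pushdiscoverymapping directly.
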
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs that are draft on the remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly
    # XXX a set of roots. We may want to ensure it is, but it is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds the changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set(pushop.bookmarks)

    comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in advdst + diverge + differ:
        if b in explicit:
            explicit.remove(b)
            pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
            # treat as "deleted locally"
            pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are here for 80 char limit reasons
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are going to push and there is at least one obsolete or
            # unstable changeset in missing, then at least one of the
            # missing heads will be obsolete or unstable. So checking heads
            # only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(mst[ctx.troubles()[0]] % ctx)
        newbm = pushop.ui.configlist('bookmarks', 'pushing')
        discovery.checkheads(unfi, pushop.remote, outgoing,
                             pushop.remoteheads,
                             pushop.newbranch,
                             bool(pushop.incoming),
                             newbm)
    return True

416 # List of names of steps to perform for an outgoing bundle2, order matters.
416 # List of names of steps to perform for an outgoing bundle2, order matters.
417 b2partsgenorder = []
417 b2partsgenorder = []
418
418
419 # Mapping between step name and function
419 # Mapping between step name and function
420 #
420 #
421 # This exists to help extensions wrap steps if necessary
421 # This exists to help extensions wrap steps if necessary
422 b2partsgenmapping = {}
422 b2partsgenmapping = {}
423
423
424 def b2partsgenerator(stepname):
424 def b2partsgenerator(stepname):
425 """decorator for function generating bundle2 part
425 """decorator for function generating bundle2 part
426
426
427 The function is added to the step -> function mapping and appended to the
427 The function is added to the step -> function mapping and appended to the
428 list of steps. Beware that decorated functions will be added in order
428 list of steps. Beware that decorated functions will be added in order
429 (this may matter).
429 (this may matter).
430
430
431 You can only use this decorator for new steps, if you want to wrap a step
431 You can only use this decorator for new steps, if you want to wrap a step
432 from an extension, attack the b2partsgenmapping dictionary directly."""
432 from an extension, attack the b2partsgenmapping dictionary directly."""
433 def dec(func):
433 def dec(func):
434 assert stepname not in b2partsgenmapping
434 assert stepname not in b2partsgenmapping
435 b2partsgenmapping[stepname] = func
435 b2partsgenmapping[stepname] = func
436 b2partsgenorder.append(stepname)
436 b2partsgenorder.append(stepname)
437 return func
437 return func
438 return dec
438 return dec
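
# Illustrative sketch (not part of this module; the 'myextension' names and
# the mercurial.exchange import path are assumptions): an extension can
# register a new part generator with the decorator above, or wrap an existing
# step through b2partsgenmapping.
#
#     from mercurial import exchange
#
#     @exchange.b2partsgenerator('myextension-data')
#     def _pushmyextensiondata(pushop, bundler):
#         if 'myextension-data' in pushop.stepsdone:
#             return
#         pushop.stepsdone.add('myextension-data')
#         bundler.newpart('b2x:myextension-data', data='payload')
#
#     # wrapping an existing step instead of adding a new one:
#     origphases = exchange.b2partsgenmapping['phase']
#     def wrappedphases(pushop, bundler):
#         pushop.ui.debug('about to generate phase parts\n')
#         return origphases(pushop, bundler)
#     exchange.b2partsgenmapping['phase'] = wrappedphases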

@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    if not pushop.force:
        bundler.newpart('b2x:check:heads', data=iter(pushop.remoteheads))
    b2caps = bundle2.bundle2caps(pushop.remote)
    version = None
    cgversions = b2caps.get('b2x:changegroup')
    if not cgversions:  # 3.1 and 3.2 ship with an empty value
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing)
    else:
        cgversions = [v for v in cgversions if v in changegroup.packermap]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
        cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                                pushop.outgoing,
                                                version=version)
    cgpart = bundler.newpart('b2x:changegroup', data=cg)
    if version is not None:
        cgpart.addparam('version', version)
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'b2x:pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []
    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('b2x:pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if 'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        buildobsmarkerspart(bundler, pushop.outobsmarkers)

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'b2x:pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode
    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('b2x:pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
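        # Classify the pushkey operation for user-facing messages: an empty
        # 'old' value means the bookmark does not exist remotely yet
        # ('export'), an empty 'new' value means it is being removed
        # ('delete'); anything else is a plain 'update'.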
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))


    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1
    return handlereply


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup, but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('b2x:replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        reply = pushop.remote.unbundle(stream, ['force'], 'push')
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    try:
        trgetter = None
        if pushback:
            trgetter = pushop.trmanager.transaction
        op = bundle2.processbundle(pushop.repo, reply, trgetter)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
                                             bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # then we may be in the issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        rslts = []
        remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=()):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revisions we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = bookmarks
        # do we force pull?
        self.force = force
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            repo = self.repo
            p = lambda: self._tr.writepending() and repo.root or ""
            repo.hook('b2x-pretransactionclose', throw=True, pending=p,
                      **self._tr.hookargs)
            hookargs = dict(self._tr.hookargs)
            def runhooks():
                repo.hook('b2x-transactionclose', **hookargs)
            self._tr.addpostclose('b2x-hook-transactionclose',
                                  lambda tr: repo._afterlock(runhooks))
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()
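
# Illustrative sketch of the intended life cycle (mirroring pull() below):
# create the manager up front, let steps lazily open the transaction via
# transaction(), then close() on success and unconditionally release() in a
# finally block so an unclosed transaction is rolled back.
#
#     trmanager = transactionmanager(repo, 'pull', remote.url())
#     try:
#         ...  # steps may call trmanager.transaction() as needed
#         trmanager.close()
#     finally:
#         trmanager.release()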

def pull(repo, remote, heads=None, force=False, bookmarks=()):
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    pullop.remotebookmarks = remote.listkeys('bookmarks')
    lock = pullop.repo.lock()
    try:
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        _pulldiscovery(pullop)
        if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
            and pullop.remote.capable('bundle2-exp')):
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        pullop.trmanager.release()
        lock.release()

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec
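
# Illustrative sketch (the 'myextension' names are hypothetical): an
# extension can register its own discovery step the same way the
# 'changegroup' step below is registered.
#
#     @pulldiscovery('myextension-discovery')
#     def _pulldiscoverymyextension(pullop):
#         pullop.repo.ui.debug('running extra discovery\n')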

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the
        # unknown remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    remotecaps = bundle2.bundle2caps(pullop.remote)
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}
    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'b2x:listkeys' in remotecaps:
        kwargs['listkeys'] = ['phase', 'bookmarks']
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(remotecaps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value
            _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay the opening of the transaction as late as possible so we
    # don't open a transaction for nothing and break future useful
    # rollback calls.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing: all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)
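
# Illustrative note: phases.advanceboundary() only ever moves changesets
# toward a more public phase (e.g. draft -> public, secret -> draft), so the
# two filtered calls above are monotonic. Pulling from a publishing server is
# the common case: remotephases is then essentially {'publishing': 'True'},
# pheads is the whole pulled subset, and every still-draft ancestor of it
# becomes public locally.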

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = set(['HG2Y'])
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urllib.quote(capsblob))
    return caps
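
# Illustrative note: the resulting set smuggles the bundle2 capability blob
# through the bundlecaps argument of getbundle(), e.g. (blob abridged and
# hypothetical):
#
#     set(['HG2Y', 'bundle2=HG2Y%0Ab2x%253Achangegroup%3D...'])
#
# getbundle() below performs the inverse: it looks for entries starting with
# 'bundle2=', unquotes the remainder and feeds it to bundle2.decodecaps().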

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname):
    """decorator for functions generating bundle2 parts for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the getbundle2partsmapping dictionary
    directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        getbundle2partsorder.append(stepname)
        return func
    return dec

def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
              **kwargs):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG2Y depending on the bundlecaps
    passed. For now, the bundle can contain only changegroup, but this will
    change when more part types become available for bundle2.

    This is different from changegroup.getchangegroup, which only returns an
    HG10 changegroup bundle. They may eventually get reunited in the future
    when we have a clearer idea of the API we want for querying different
    data.

    The implementation is at a very early stage and will get massive rework
    when the API of bundle is refined.
    """
    # bundle10 case
    if bundlecaps is None or 'HG2Y' not in bundlecaps:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        return changegroup.getchangegroup(repo, source, heads=heads,
                                          common=common, bundlecaps=bundlecaps)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urllib.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return util.chunkbuffer(bundler.getchunks())
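
# Illustrative sketch (hypothetical names): a getbundle part generator
# registered by an extension follows the same signature as the built-in
# generators below and simply adds parts to the bundler.
#
#     @getbundle2partsgenerator('myextension-cache')
#     def _getbundlemyextensionpart(bundler, repo, source, bundlecaps=None,
#                                   b2caps=None, **kwargs):
#         if kwargs.get('myextensioncache', False):
#             bundler.newpart('b2x:myextension-cache', data='payload')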
1198
1198
1199 @getbundle2partsgenerator('changegroup')
1199 @getbundle2partsgenerator('changegroup')
1200 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1200 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1201 b2caps=None, heads=None, common=None, **kwargs):
1201 b2caps=None, heads=None, common=None, **kwargs):
1202 """add a changegroup part to the requested bundle"""
1202 """add a changegroup part to the requested bundle"""
1203 cg = None
1203 cg = None
1204 if kwargs.get('cg', True):
1204 if kwargs.get('cg', True):
1205 # build changegroup bundle here.
1205 # build changegroup bundle here.
1206 version = None
1206 version = None
1207 cgversions = b2caps.get('b2x:changegroup')
1207 cgversions = b2caps.get('b2x:changegroup')
1208 if not cgversions: # 3.1 and 3.2 ship with an empty value
1208 if not cgversions: # 3.1 and 3.2 ship with an empty value
1209 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1209 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1210 common=common,
1210 common=common,
1211 bundlecaps=bundlecaps)
1211 bundlecaps=bundlecaps)
1212 else:
1212 else:
1213 cgversions = [v for v in cgversions if v in changegroup.packermap]
1213 cgversions = [v for v in cgversions if v in changegroup.packermap]
1214 if not cgversions:
1214 if not cgversions:
1215 raise ValueError(_('no common changegroup version'))
1215 raise ValueError(_('no common changegroup version'))
1216 version = max(cgversions)
1216 version = max(cgversions)
1217 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1217 cg = changegroup.getchangegroupraw(repo, source, heads=heads,
1218 common=common,
1218 common=common,
1219 bundlecaps=bundlecaps,
1219 bundlecaps=bundlecaps,
1220 version=version)
1220 version=version)
1221
1221
1222 if cg:
1222 if cg:
1223 part = bundler.newpart('b2x:changegroup', data=cg)
1223 part = bundler.newpart('b2x:changegroup', data=cg)
1224 if version is not None:
1224 if version is not None:
1225 part.addparam('version', version)
1225 part.addparam('version', version)
1226
1226
1227 @getbundle2partsgenerator('listkeys')
1228 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1229 b2caps=None, **kwargs):
1230 """add parts containing listkeys namespaces to the requested bundle"""
1231 listkeys = kwargs.get('listkeys', ())
1232 for namespace in listkeys:
1233 part = bundler.newpart('b2x:listkeys')
1234 part.addparam('namespace', namespace)
1235 keys = repo.listkeys(namespace).items()
1236 part.data = pushkey.encodekeys(keys)
1237
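
For reference, the payload handed to part.data is, in essence, a newline-separated list of tab-separated key/value records. A simplified stand-in for pushkey.encodekeys; the real encoder also escapes the strings, which this sketch omits:

    def encodekeys_sketch(keys):
        # one "key<TAB>value" record per line (escaping omitted)
        return '\n'.join('%s\t%s' % (k, v) for k, v in keys)

    assert encodekeys_sketch([('bookmarks', 'deadbeef')]) == 'bookmarks\tdeadbeef'
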
1238 @getbundle2partsgenerator('obsmarkers')
1239 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1240 b2caps=None, heads=None, **kwargs):
1241 """add an obsolescence markers part to the requested bundle"""
1242 if kwargs.get('obsmarkers', False):
1243 if heads is None:
1244 heads = repo.heads()
1245 subset = [c.node() for c in repo.set('::%ln', heads)]
1246 markers = repo.obsstore.relevantmarkers(subset)
1247 buildobsmarkerspart(bundler, markers)
1248
1249 def check_heads(repo, their_heads, context):
1250 """check if the heads of a repo have been modified
1251
1252 Used by peer for unbundling.
1253 """
1254 heads = repo.heads()
1255 heads_hash = util.sha1(''.join(sorted(heads))).digest()
1256 if not (their_heads == ['force'] or their_heads == heads or
1257 their_heads == ['hashed', heads_hash]):
1258 # someone else committed/pushed/unbundled while we
1259 # were transferring data
1260 raise error.PushRaced('repository changed while %s - '
1261 'please try again' % context)
1262
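
The race check works because the pushing client samples a SHA-1 of the sorted remote heads before uploading; if the repository's heads change in the meantime, the digests stop matching. A minimal sketch of both sides of that handshake, using hashlib directly in place of util.sha1 (the node strings are made up):

    import hashlib

    def hashheads(heads):
        # order-independent fingerprint of the current heads
        return hashlib.sha1(''.join(sorted(heads))).digest()

    # client side, sampled before the push starts:
    expected = ['hashed', hashheads(['node-a', 'node-b'])]
    # server side, at unbundle time; same heads in any order -> no race
    assert expected == ['hashed', hashheads(['node-b', 'node-a'])]
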
1263 def unbundle(repo, cg, heads, source, url):
1264 """Apply a bundle to a repo.
1265
1266 This function makes sure the repo is locked during the application and has
1267 a mechanism to check that no push race occurred between the creation of
1268 the bundle and its application.
1269
1270 If the push was raced, a PushRaced exception is raised."""
1271 r = 0
1272 # need a transaction when processing a bundle2 stream
1273 tr = None
1274 lock = repo.lock()
1275 try:
1276 check_heads(repo, heads, 'uploading changes')
1277 # push can proceed
1278 if util.safehasattr(cg, 'params'):
1279 try:
1280 tr = repo.transaction('unbundle')
1281 tr.hookargs['source'] = source
1282 tr.hookargs['url'] = url
1283 tr.hookargs['bundle2-exp'] = '1'
1284 r = bundle2.processbundle(repo, cg, lambda: tr).reply
1285 p = lambda: tr.writepending() and repo.root or ""
1286 repo.hook('b2x-pretransactionclose', throw=True, pending=p,
1287 **tr.hookargs)
1288 hookargs = dict(tr.hookargs)
1289 def runhooks():
1290 repo.hook('b2x-transactionclose', **hookargs)
1291 tr.addpostclose('b2x-hook-transactionclose',
1292 lambda tr: repo._afterlock(runhooks))
1293 tr.close()
1294 except Exception, exc:
1295 exc.duringunbundle2 = True
1296 raise
1297 else:
1298 r = changegroup.addchangegroup(repo, cg, source, url)
1299 finally:
1300 if tr is not None:
1301 tr.release()
1302 lock.release()
1303 return r
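
One subtlety in the bundle2 branch above is the pending=p lambda: tr.writepending() materializes the not-yet-committed changelog (the 00changelog.i.a file that localrepo reads back when HG_PENDING is set, see line 441 below) and returns True only if there was something to write; the and/or chain then yields the repo root for the hook, or an empty string. The conditional idiom in isolation:

    def pending_root(writepending, root):
        # old-style conditional expression: root if writepending() else ""
        return writepending() and root or ""

    assert pending_root(lambda: True, '/repo') == '/repo'
    assert pending_root(lambda: False, '/repo') == ''
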
@@ -1,1925 +1,1925 b''
1 # localrepo.py - read/write repository class for mercurial
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7 from node import hex, nullid, short
8 from i18n import _
9 import urllib
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
12 import lock as lockmod
13 import transaction, store, encoding, exchange, bundle2
14 import scmutil, util, extensions, hook, error, revset
15 import match as matchmod
16 import merge as mergemod
17 import tags as tagsmod
18 from lock import release
19 import weakref, errno, os, time, inspect
20 import branchmap, pathutil
21 import namespaces
22 propertycache = util.propertycache
23 filecache = scmutil.filecache
24
25 class repofilecache(filecache):
26 """All filecache usage on repo is done for logic that should be unfiltered
27 """
28
29 def __get__(self, repo, type=None):
30 return super(repofilecache, self).__get__(repo.unfiltered(), type)
31 def __set__(self, repo, value):
32 return super(repofilecache, self).__set__(repo.unfiltered(), value)
33 def __delete__(self, repo):
34 return super(repofilecache, self).__delete__(repo.unfiltered())
35
36 class storecache(repofilecache):
37 """filecache for files in the store"""
38 def join(self, obj, fname):
39 return obj.sjoin(fname)
40
41 class unfilteredpropertycache(propertycache):
42 """propertycache that apply to unfiltered repo only"""
42 """propertycache that apply to unfiltered repo only"""
43
44 def __get__(self, repo, type=None):
45 unfi = repo.unfiltered()
46 if unfi is repo:
47 return super(unfilteredpropertycache, self).__get__(unfi)
48 return getattr(unfi, self.name)
49
50 class filteredpropertycache(propertycache):
51 """propertycache that must take filtering in account"""
51 """propertycache that must take filtering in account"""
52
53 def cachevalue(self, obj, value):
54 object.__setattr__(obj, self.name, value)
55
56
57 def hasunfilteredcache(repo, name):
58 """check if a repo has an unfilteredpropertycache value for <name>"""
59 return name in vars(repo.unfiltered())
60
61 def unfilteredmethod(orig):
62 """decorate method that always need to be run on unfiltered version"""
62 """decorate method that always need to be run on unfiltered version"""
63 def wrapper(repo, *args, **kwargs):
64 return orig(repo.unfiltered(), *args, **kwargs)
65 return wrapper
66
67 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
68 'unbundle'))
69 legacycaps = moderncaps.union(set(['changegroupsubset']))
70
71 class localpeer(peer.peerrepository):
72 '''peer for a local repo; reflects only the most recent API'''
73
74 def __init__(self, repo, caps=moderncaps):
75 peer.peerrepository.__init__(self)
76 self._repo = repo.filtered('served')
77 self.ui = repo.ui
78 self._caps = repo._restrictcapabilities(caps)
79 self.requirements = repo.requirements
80 self.supportedformats = repo.supportedformats
81
82 def close(self):
83 self._repo.close()
84
85 def _capabilities(self):
86 return self._caps
87
88 def local(self):
89 return self._repo
90
91 def canpush(self):
92 return True
93
94 def url(self):
95 return self._repo.url()
96
97 def lookup(self, key):
98 return self._repo.lookup(key)
99
100 def branchmap(self):
101 return self._repo.branchmap()
102
103 def heads(self):
104 return self._repo.heads()
105
106 def known(self, nodes):
107 return self._repo.known(nodes)
108
109 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
110 **kwargs):
111 cg = exchange.getbundle(self._repo, source, heads=heads,
112 common=common, bundlecaps=bundlecaps, **kwargs)
113 if bundlecaps is not None and 'HG2Y' in bundlecaps:
114 # When requesting a bundle2, getbundle returns a stream to make the
115 # wire level function happier. We need to build a proper object
116 # from it in local peer.
117 cg = bundle2.unbundle20(self.ui, cg)
117 cg = bundle2.getunbundler(self.ui, cg)
118 return cg
119
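
The 117/117 pair above is the heart of this changeset: callers stop instantiating unbundle20 directly and instead ask the bundle2.getunbundler factory, which can choose an unbundler class from the stream's magic string (the same substitution recurs in unbundle below). A hedged sketch of what such a factory looks like; the registry contents, class name, and error types here are assumptions, not the real module:

    class unbundler20sketch(object):
        # stand-in for the real unbundle20 class
        def __init__(self, ui, fh):
            self.ui, self.fh = ui, fh

    # hypothetical registry mapping a two-character version to its class
    formatmap = {'2Y': unbundler20sketch}

    def getunbundler_sketch(ui, fh, magicstring=None):
        # dispatch on the four-byte magic: "HG" plus a version
        magicstring = magicstring or fh.read(4)
        magic, version = magicstring[0:2], magicstring[2:4]
        if magic != 'HG':
            raise ValueError('not a Mercurial bundle')
        if version not in formatmap:
            raise ValueError('unknown bundle version %s' % version)
        return formatmap[version](ui, fh)

    unbundler = getunbundler_sketch(None, None, magicstring='HG2Y')
    assert isinstance(unbundler, unbundler20sketch)

Routing construction through a factory keeps the call sites stable while letting new bundle versions plug in behind one dispatch point.
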
120 # TODO We might want to move the next two calls into legacypeer and add
121 # unbundle instead.
122
123 def unbundle(self, cg, heads, url):
124 """apply a bundle on a repo
125
126 This function handles the repo locking itself."""
127 try:
128 cg = exchange.readbundle(self.ui, cg, None)
129 ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
130 if util.safehasattr(ret, 'getchunks'):
131 # This is a bundle20 object, turn it into an unbundler.
132 # This little dance should be dropped eventually when the API
133 # is finally improved.
134 stream = util.chunkbuffer(ret.getchunks())
135 ret = bundle2.unbundle20(self.ui, stream)
135 ret = bundle2.getunbundler(self.ui, stream)
136 return ret
137 except error.PushRaced, exc:
138 raise error.ResponseError(_('push failed:'), str(exc))
139
140 def lock(self):
141 return self._repo.lock()
142
143 def addchangegroup(self, cg, source, url):
144 return changegroup.addchangegroup(self._repo, cg, source, url)
145
146 def pushkey(self, namespace, key, old, new):
147 return self._repo.pushkey(namespace, key, old, new)
148
149 def listkeys(self, namespace):
150 return self._repo.listkeys(namespace)
151
152 def debugwireargs(self, one, two, three=None, four=None, five=None):
153 '''used to test argument passing over the wire'''
154 return "%s %s %s %s %s" % (one, two, three, four, five)
155
156 class locallegacypeer(localpeer):
157 '''peer extension which implements legacy methods too; used for tests with
158 restricted capabilities'''
159
160 def __init__(self, repo):
161 localpeer.__init__(self, repo, caps=legacycaps)
162
163 def branches(self, nodes):
164 return self._repo.branches(nodes)
165
166 def between(self, pairs):
167 return self._repo.between(pairs)
168
169 def changegroup(self, basenodes, source):
170 return changegroup.changegroup(self._repo, basenodes, source)
171
172 def changegroupsubset(self, bases, heads, source):
173 return changegroup.changegroupsubset(self._repo, bases, heads, source)
174
175 class localrepository(object):
176
177 supportedformats = set(('revlogv1', 'generaldelta', 'manifestv2'))
178 _basesupported = supportedformats | set(('store', 'fncache', 'shared',
179 'dotencode'))
180 openerreqs = set(('revlogv1', 'generaldelta', 'manifestv2'))
181 requirements = ['revlogv1']
182 filtername = None
183
184 # a list of (ui, featureset) functions.
185 # only functions defined in module of enabled extensions are invoked
186 featuresetupfuncs = set()
187
188 def _baserequirements(self, create):
189 return self.requirements[:]
190
191 def __init__(self, baseui, path=None, create=False):
192 self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
193 self.wopener = self.wvfs
194 self.root = self.wvfs.base
195 self.path = self.wvfs.join(".hg")
196 self.origroot = path
197 self.auditor = pathutil.pathauditor(self.root, self._checknested)
198 self.vfs = scmutil.vfs(self.path)
199 self.opener = self.vfs
200 self.baseui = baseui
201 self.ui = baseui.copy()
202 self.ui.copy = baseui.copy # prevent copying repo configuration
203 # A list of callbacks to shape the phase if no data were found.
204 # Callbacks are in the form: func(repo, roots) --> processed root.
205 # This list is to be filled by extensions during repo setup.
206 self._phasedefaults = []
207 try:
208 self.ui.readconfig(self.join("hgrc"), self.root)
209 extensions.loadall(self.ui)
210 except IOError:
211 pass
212
213 if self.featuresetupfuncs:
214 self.supported = set(self._basesupported) # use private copy
215 extmods = set(m.__name__ for n, m
216 in extensions.extensions(self.ui))
217 for setupfunc in self.featuresetupfuncs:
218 if setupfunc.__module__ in extmods:
219 setupfunc(self.ui, self.supported)
220 else:
221 self.supported = self._basesupported
222
223 if not self.vfs.isdir():
224 if create:
225 if not self.wvfs.exists():
226 self.wvfs.makedirs()
227 self.vfs.makedir(notindexed=True)
228 requirements = self._baserequirements(create)
229 if self.ui.configbool('format', 'usestore', True):
230 self.vfs.mkdir("store")
231 requirements.append("store")
232 if self.ui.configbool('format', 'usefncache', True):
233 requirements.append("fncache")
234 if self.ui.configbool('format', 'dotencode', True):
235 requirements.append('dotencode')
236 # create an invalid changelog
237 self.vfs.append(
238 "00changelog.i",
239 '\0\0\0\2' # represents revlogv2
240 ' dummy changelog to prevent using the old repo layout'
241 )
242 if self.ui.configbool('format', 'generaldelta', False):
243 requirements.append("generaldelta")
244 if self.ui.configbool('experimental', 'manifestv2', False):
245 requirements.append("manifestv2")
246 requirements = set(requirements)
247 else:
248 raise error.RepoError(_("repository %s not found") % path)
249 elif create:
250 raise error.RepoError(_("repository %s already exists") % path)
251 else:
252 try:
253 requirements = scmutil.readrequires(self.vfs, self.supported)
254 except IOError, inst:
255 if inst.errno != errno.ENOENT:
256 raise
257 requirements = set()
258
259 self.sharedpath = self.path
260 try:
261 vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
262 realpath=True)
263 s = vfs.base
264 if not vfs.exists():
265 raise error.RepoError(
266 _('.hg/sharedpath points to nonexistent directory %s') % s)
267 self.sharedpath = s
268 except IOError, inst:
269 if inst.errno != errno.ENOENT:
270 raise
271
272 self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
273 self.spath = self.store.path
274 self.svfs = self.store.vfs
275 self.sopener = self.svfs
276 self.sjoin = self.store.join
277 self.vfs.createmode = self.store.createmode
278 self._applyrequirements(requirements)
279 if create:
280 self._writerequirements()
281
282
283 self._branchcaches = {}
284 self._revbranchcache = None
285 self.filterpats = {}
286 self._datafilters = {}
287 self._transref = self._lockref = self._wlockref = None
288
289 # A cache for various files under .hg/ that tracks file changes,
290 # (used by the filecache decorator)
291 #
292 # Maps a property name to its util.filecacheentry
293 self._filecache = {}
294
295 # hold sets of revision to be filtered
296 # should be cleared when something might have changed the filter value:
297 # - new changesets,
298 # - phase change,
299 # - new obsolescence marker,
300 # - working directory parent change,
301 # - bookmark changes
302 self.filteredrevcache = {}
303
304 # generic mapping between names and nodes
305 self.names = namespaces.namespaces()
306
307 def close(self):
308 self._writecaches()
309
310 def _writecaches(self):
311 if self._revbranchcache:
312 self._revbranchcache.write()
313
314 def _restrictcapabilities(self, caps):
315 # bundle2 is not ready for prime time, drop it unless explicitly
316 # required by the tests (or some brave tester)
317 if self.ui.configbool('experimental', 'bundle2-exp', False):
318 caps = set(caps)
319 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self))
320 caps.add('bundle2-exp=' + urllib.quote(capsblob))
321 return caps
322
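
This is the producing side of the capability dance: the caps blob is urlquoted into a single token so it survives the space-separated capability list, and the consumer splits off the prefix and unquotes the remainder (the exchange code at the top of this diff does the same for its 'bundle2=' prefix). A round trip of just the transport encoding, with a made-up payload:

    import urllib

    capsblob = 'HG2Y\nchangegroup=01,02'        # hypothetical caps blob
    token = 'bundle2-exp=' + urllib.quote(capsblob)

    # consuming side: strip the prefix, then unquote
    prefix = 'bundle2-exp='
    assert urllib.unquote(token[len(prefix):]) == capsblob
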
323 def _applyrequirements(self, requirements):
324 self.requirements = requirements
325 self.svfs.options = dict((r, 1) for r in requirements
326 if r in self.openerreqs)
327 chunkcachesize = self.ui.configint('format', 'chunkcachesize')
328 if chunkcachesize is not None:
329 self.svfs.options['chunkcachesize'] = chunkcachesize
330 maxchainlen = self.ui.configint('format', 'maxchainlen')
331 if maxchainlen is not None:
332 self.svfs.options['maxchainlen'] = maxchainlen
333 manifestcachesize = self.ui.configint('format', 'manifestcachesize')
334 if manifestcachesize is not None:
335 self.svfs.options['manifestcachesize'] = manifestcachesize
336 usetreemanifest = self.ui.configbool('experimental', 'treemanifest')
337 if usetreemanifest is not None:
338 self.svfs.options['usetreemanifest'] = usetreemanifest
339
340 def _writerequirements(self):
341 reqfile = self.vfs("requires", "w")
342 for r in sorted(self.requirements):
343 reqfile.write("%s\n" % r)
344 reqfile.close()
345
346 def _checknested(self, path):
347 """Determine if path is a legal nested repository."""
348 if not path.startswith(self.root):
349 return False
350 subpath = path[len(self.root) + 1:]
351 normsubpath = util.pconvert(subpath)
352
353 # XXX: Checking against the current working copy is wrong in
354 # the sense that it can reject things like
355 #
356 # $ hg cat -r 10 sub/x.txt
357 #
358 # if sub/ is no longer a subrepository in the working copy
359 # parent revision.
360 #
361 # However, it can of course also allow things that would have
362 # been rejected before, such as the above cat command if sub/
363 # is a subrepository now, but was a normal directory before.
364 # The old path auditor would have rejected by mistake since it
365 # panics when it sees sub/.hg/.
366 #
367 # All in all, checking against the working copy seems sensible
368 # since we want to prevent access to nested repositories on
369 # the filesystem *now*.
370 ctx = self[None]
371 parts = util.splitpath(subpath)
372 while parts:
373 prefix = '/'.join(parts)
374 if prefix in ctx.substate:
375 if prefix == normsubpath:
376 return True
377 else:
378 sub = ctx.sub(prefix)
379 return sub.checknested(subpath[len(prefix) + 1:])
380 else:
381 parts.pop()
382 return False
383
384 def peer(self):
385 return localpeer(self) # not cached to avoid reference cycle
386
387 def unfiltered(self):
388 """Return unfiltered version of the repository
389
390 Intended to be overwritten by filtered repo."""
391 return self
392
393 def filtered(self, name):
394 """Return a filtered version of a repository"""
395 # build a new class with the mixin and the current class
396 # (possibly subclass of the repo)
397 class proxycls(repoview.repoview, self.unfiltered().__class__):
398 pass
399 return proxycls(self, name)
400
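
filtered builds the proxy class at call time so that the view type tracks whatever concrete repository class is in use, including extension subclasses. The pattern in isolation, with stand-in classes in place of repoview.repoview and a real repo:

    class repoviewsketch(object):            # stand-in for repoview.repoview
        def __init__(self, repo, name):
            self._unfilteredrepo, self.filtername = repo, name

    class fancyrepo(object):                 # stand-in for a repo subclass
        def unfiltered(self):
            return self

    repo = fancyrepo()
    class proxycls(repoviewsketch, repo.unfiltered().__class__):
        pass                                 # mixin + concrete repo class

    view = proxycls(repo, 'visible')
    assert isinstance(view, fancyrepo) and view.filtername == 'visible'
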
401 @repofilecache('bookmarks')
402 def _bookmarks(self):
403 return bookmarks.bmstore(self)
404
405 @repofilecache('bookmarks.current')
406 def _bookmarkcurrent(self):
407 return bookmarks.readcurrent(self)
408
409 def bookmarkheads(self, bookmark):
410 name = bookmark.split('@', 1)[0]
411 heads = []
412 for mark, n in self._bookmarks.iteritems():
413 if mark.split('@', 1)[0] == name:
414 heads.append(n)
415 return heads
416
417 @storecache('phaseroots')
418 def _phasecache(self):
419 return phases.phasecache(self, self._phasedefaults)
420
421 @storecache('obsstore')
422 def obsstore(self):
423 # read default format for new obsstore.
424 defaultformat = self.ui.configint('format', 'obsstore-version', None)
425 # rely on obsstore class default when possible.
426 kwargs = {}
427 if defaultformat is not None:
428 kwargs['defaultformat'] = defaultformat
429 readonly = not obsolete.isenabled(self, obsolete.createmarkersopt)
430 store = obsolete.obsstore(self.svfs, readonly=readonly,
431 **kwargs)
432 if store and readonly:
433 self.ui.warn(
434 _('obsolete feature not enabled but %i markers found!\n')
435 % len(list(store)))
436 return store
437
438 @storecache('00changelog.i')
439 def changelog(self):
440 c = changelog.changelog(self.svfs)
441 if 'HG_PENDING' in os.environ:
442 p = os.environ['HG_PENDING']
443 if p.startswith(self.root):
444 c.readpending('00changelog.i.a')
445 return c
446
447 @storecache('00manifest.i')
448 def manifest(self):
449 return manifest.manifest(self.svfs)
450
451 @repofilecache('dirstate')
452 def dirstate(self):
453 warned = [0]
454 def validate(node):
455 try:
456 self.changelog.rev(node)
457 return node
458 except error.LookupError:
459 if not warned[0]:
460 warned[0] = True
461 self.ui.warn(_("warning: ignoring unknown"
462 " working parent %s!\n") % short(node))
463 return nullid
464
465 return dirstate.dirstate(self.vfs, self.ui, self.root, validate)
466
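
Note the warned = [0] idiom in the dirstate validator: Python 2 closures cannot rebind an outer local (there is no nonlocal), so a one-element list is mutated instead to remember that the warning already fired. Reduced to its essentials:

    def make_validator(known, warn):
        warned = [0]                      # mutable cell shared with the closure
        def validate(node):
            if node in known:
                return node
            if not warned[0]:
                warned[0] = True          # remember the warning already fired
                warn('ignoring unknown working parent %s' % node)
            return 'nullid'
        return validate

    msgs = []
    validate = make_validator(set(['good']), msgs.append)
    validate('bad'); validate('bad')
    assert len(msgs) == 1                 # warned exactly once
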
467 def __getitem__(self, changeid):
468 if changeid is None:
469 return context.workingctx(self)
470 if isinstance(changeid, slice):
471 return [context.changectx(self, i)
472 for i in xrange(*changeid.indices(len(self)))
473 if i not in self.changelog.filteredrevs]
474 return context.changectx(self, changeid)
475
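
Slice support in __getitem__ leans on slice.indices(), which normalizes negative and open-ended bounds against the repository length before filtered revisions are skipped. The normalization step on its own, with made-up numbers:

    length = 10                        # pretend len(repo) == 10
    filtered = set([3, 4])             # pretend these revs are hidden
    s = slice(-8, None)                # "the last eight revisions"
    revs = [i for i in xrange(*s.indices(length)) if i not in filtered]
    assert revs == [2, 5, 6, 7, 8, 9]  # 3 and 4 were filtered out
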
476 def __contains__(self, changeid):
477 try:
478 self[changeid]
479 return True
480 except error.RepoLookupError:
481 return False
482
483 def __nonzero__(self):
484 return True
485
486 def __len__(self):
487 return len(self.changelog)
488
489 def __iter__(self):
490 return iter(self.changelog)
491
492 def revs(self, expr, *args):
493 '''Return a list of revisions matching the given revset'''
494 expr = revset.formatspec(expr, *args)
495 m = revset.match(None, expr)
496 return m(self)
497
498 def set(self, expr, *args):
499 '''
500 Yield a context for each matching revision, after doing arg
501 replacement via revset.formatspec
502 '''
503 for r in self.revs(expr, *args):
504 yield self[r]
505
506 def url(self):
507 return 'file:' + self.root
508
509 def hook(self, name, throw=False, **args):
510 """Call a hook, passing this repo instance.
511
512 This is a convenience method to aid invoking hooks. Extensions likely
513 won't call this unless they have registered a custom hook or are
514 replacing code that is expected to call a hook.
515 """
516 return hook.hook(self.ui, self, name, throw, **args)
517
518 @unfilteredmethod
519 def _tag(self, names, node, message, local, user, date, extra={},
520 editor=False):
521 if isinstance(names, str):
522 names = (names,)
523
524 branches = self.branchmap()
525 for name in names:
526 self.hook('pretag', throw=True, node=hex(node), tag=name,
527 local=local)
528 if name in branches:
529 self.ui.warn(_("warning: tag %s conflicts with existing"
530 " branch name\n") % name)
531
532 def writetags(fp, names, munge, prevtags):
533 fp.seek(0, 2)
534 if prevtags and prevtags[-1] != '\n':
535 fp.write('\n')
536 for name in names:
537 if munge:
538 m = munge(name)
539 else:
540 m = name
541
542 if (self._tagscache.tagtypes and
543 name in self._tagscache.tagtypes):
544 old = self.tags().get(name, nullid)
545 fp.write('%s %s\n' % (hex(old), m))
546 fp.write('%s %s\n' % (hex(node), m))
547 fp.close()
548
549 prevtags = ''
550 if local:
551 try:
552 fp = self.vfs('localtags', 'r+')
553 except IOError:
554 fp = self.vfs('localtags', 'a')
555 else:
556 prevtags = fp.read()
557
558 # local tags are stored in the current charset
559 writetags(fp, names, None, prevtags)
560 for name in names:
561 self.hook('tag', node=hex(node), tag=name, local=local)
562 return
563
564 try:
565 fp = self.wfile('.hgtags', 'rb+')
566 except IOError, e:
567 if e.errno != errno.ENOENT:
568 raise
569 fp = self.wfile('.hgtags', 'ab')
570 else:
571 prevtags = fp.read()
572
573 # committed tags are stored in UTF-8
574 writetags(fp, names, encoding.fromlocal, prevtags)
575
576 fp.close()
577
578 self.invalidatecaches()
579
580 if '.hgtags' not in self.dirstate:
581 self[None].add(['.hgtags'])
582
583 m = matchmod.exact(self.root, '', ['.hgtags'])
584 tagnode = self.commit(message, user, date, extra=extra, match=m,
585 editor=editor)
586
587 for name in names:
588 self.hook('tag', node=hex(node), tag=name, local=local)
589
590 return tagnode
591
592 def tag(self, names, node, message, local, user, date, editor=False):
593 '''tag a revision with one or more symbolic names.
594
595 names is a list of strings or, when adding a single tag, names may be a
596 string.
597
598 if local is True, the tags are stored in a per-repository file.
599 otherwise, they are stored in the .hgtags file, and a new
600 changeset is committed with the change.
601
602 keyword arguments:
603
604 local: whether to store tags in non-version-controlled file
605 (default False)
606
607 message: commit message to use if committing
608
609 user: name of user to use if committing
610
611 date: date tuple to use if committing'''
612
613 if not local:
614 m = matchmod.exact(self.root, '', ['.hgtags'])
615 if util.any(self.status(match=m, unknown=True, ignored=True)):
616 raise util.Abort(_('working copy of .hgtags is changed'),
617 hint=_('please commit .hgtags manually'))
618
619 self.tags() # instantiate the cache
620 self._tag(names, node, message, local, user, date, editor=editor)
621
622 @filteredpropertycache
623 def _tagscache(self):
624 '''Returns a tagscache object that contains various tags related
625 caches.'''
626
627 # This simplifies its cache management by having one decorated
628 # function (this one) and the rest simply fetch things from it.
629 class tagscache(object):
630 def __init__(self):
631 # These two define the set of tags for this repository. tags
632 # maps tag name to node; tagtypes maps tag name to 'global' or
633 # 'local'. (Global tags are defined by .hgtags across all
634 # heads, and local tags are defined in .hg/localtags.)
635 # They constitute the in-memory cache of tags.
636 self.tags = self.tagtypes = None
637
638 self.nodetagscache = self.tagslist = None
639
640 cache = tagscache()
641 cache.tags, cache.tagtypes = self._findtags()
642
643 return cache
644
645 def tags(self):
646 '''return a mapping of tag to node'''
647 t = {}
648 if self.changelog.filteredrevs:
649 tags, tt = self._findtags()
650 else:
651 tags = self._tagscache.tags
652 for k, v in tags.iteritems():
653 try:
654 # ignore tags to unknown nodes
655 self.changelog.rev(v)
656 t[k] = v
657 except (error.LookupError, ValueError):
658 pass
659 return t
660
661 def _findtags(self):
662 '''Do the hard work of finding tags. Return a pair of dicts
663 (tags, tagtypes) where tags maps tag name to node, and tagtypes
664 maps tag name to a string like \'global\' or \'local\'.
665 Subclasses or extensions are free to add their own tags, but
666 should be aware that the returned dicts will be retained for the
667 duration of the localrepo object.'''
668
669 # XXX what tagtype should subclasses/extensions use? Currently
670 # mq and bookmarks add tags, but do not set the tagtype at all.
671 # Should each extension invent its own tag type? Should there
672 # be one tagtype for all such "virtual" tags? Or is the status
673 # quo fine?
674
675 alltags = {} # map tag name to (node, hist)
676 tagtypes = {}
677
678 tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
679 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
680
681 # Build the return dicts. Have to re-encode tag names because
682 # the tags module always uses UTF-8 (in order not to lose info
683 # writing to the cache), but the rest of Mercurial wants them in
684 # local encoding.
685 tags = {}
686 for (name, (node, hist)) in alltags.iteritems():
687 if node != nullid:
688 tags[encoding.tolocal(name)] = node
689 tags['tip'] = self.changelog.tip()
690 tagtypes = dict([(encoding.tolocal(name), value)
691 for (name, value) in tagtypes.iteritems()])
692 return (tags, tagtypes)
693
694 def tagtype(self, tagname):
695 '''
696 return the type of the given tag. result can be:
697
698 'local' : a local tag
699 'global' : a global tag
700 None : tag does not exist
701 '''
702
703 return self._tagscache.tagtypes.get(tagname)
704
705 def tagslist(self):
706 '''return a list of tags ordered by revision'''
707 if not self._tagscache.tagslist:
708 l = []
709 for t, n in self.tags().iteritems():
710 l.append((self.changelog.rev(n), t, n))
711 self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]
712
713 return self._tagscache.tagslist
714
715 def nodetags(self, node):
716 '''return the tags associated with a node'''
717 if not self._tagscache.nodetagscache:
718 nodetagscache = {}
719 for t, n in self._tagscache.tags.iteritems():
720 nodetagscache.setdefault(n, []).append(t)
721 for tags in nodetagscache.itervalues():
722 tags.sort()
723 self._tagscache.nodetagscache = nodetagscache
724 return self._tagscache.nodetagscache.get(node, [])
725
726 def nodebookmarks(self, node):
727 marks = []
728 for bookmark, n in self._bookmarks.iteritems():
729 if n == node:
730 marks.append(bookmark)
731 return sorted(marks)
732
733 def branchmap(self):
734 '''returns a dictionary {branch: [branchheads]} with branchheads
735 ordered by increasing revision number'''
736 branchmap.updatecache(self)
737 return self._branchcaches[self.filtername]
738
739 @unfilteredmethod
740 def revbranchcache(self):
741 if not self._revbranchcache:
742 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
743 return self._revbranchcache
744
745 def branchtip(self, branch, ignoremissing=False):
746 '''return the tip node for a given branch
747
748 If ignoremissing is True, then this method will not raise an error.
749 This is helpful for callers that only expect None for a missing branch
750 (e.g. namespace).
751
752 '''
753 try:
754 return self.branchmap().branchtip(branch)
755 except KeyError:
756 if not ignoremissing:
757 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
758 else:
759 pass
760
761 def lookup(self, key):
762 return self[key].node()
763
764 def lookupbranch(self, key, remote=None):
765 repo = remote or self
766 if key in repo.branchmap():
767 return key
768
769 repo = (remote and remote.local()) and remote or self
770 return repo[key].branch()
771
772 def known(self, nodes):
773 nm = self.changelog.nodemap
774 pc = self._phasecache
775 result = []
776 for n in nodes:
777 r = nm.get(n)
778 resp = not (r is None or pc.phase(self, r) >= phases.secret)
779 result.append(resp)
780 return result
781
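
known answers node-membership queries while hiding secret changesets from peers: a node is reported unknown if it is absent from the nodemap or if its phase is secret or beyond. The filtering rule in isolation, with a toy nodemap and phase table (in Mercurial, public is 0, draft is 1, secret is 2):

    secret = 2
    nodemap = {'n1': 0, 'n2': 1}        # node -> revision number
    phase = {0: 0, 1: secret}           # revision -> phase

    def known_sketch(nodes):
        result = []
        for n in nodes:
            r = nodemap.get(n)
            # unknown if missing, or known but too private to reveal
            result.append(not (r is None or phase[r] >= secret))
        return result

    assert known_sketch(['n1', 'n2', 'missing']) == [True, False, False]
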
782 def local(self):
782 def local(self):
783 return self
783 return self
784
784
785 def cancopy(self):
785 def cancopy(self):
786 # so statichttprepo's override of local() works
786 # so statichttprepo's override of local() works
787 if not self.local():
787 if not self.local():
788 return False
788 return False
789 if not self.ui.configbool('phases', 'publish', True):
789 if not self.ui.configbool('phases', 'publish', True):
790 return True
790 return True
791 # if publishing we can't copy if there is filtered content
791 # if publishing we can't copy if there is filtered content
792 return not self.filtered('visible').changelog.filteredrevs
792 return not self.filtered('visible').changelog.filteredrevs
793
793
    def shared(self):
        '''the type of shared repository (None if not shared)'''
        if self.sharedpath != self.path:
            return 'store'
        return None

    def join(self, f, *insidef):
        return self.vfs.join(os.path.join(f, *insidef))

    def wjoin(self, f, *insidef):
        return self.vfs.reljoin(self.root, f, *insidef)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.svfs, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def setparents(self, p1, p2=nullid):
        self.dirstate.beginparentchange()
        copies = self.dirstate.setparents(p1, p2)
        pctx = self[p1]
        if copies:
            # Adjust copy records: the dirstate cannot do it itself, as it
            # requires access to the parents' manifests. Preserve them only
            # for entries added to the first parent.
            for f in copies:
                if f not in pctx and copies[f] in pctx:
                    self.dirstate.copy(copies[f], f)
        if p2 == nullid:
            for f, s in sorted(self.dirstate.copies().items()):
                if f not in pctx and s not in pctx:
                    self.dirstate.copy(None, f)
        self.dirstate.endparentchange()

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wvfs(f, mode)

    def _link(self, f):
        return self.wvfs.islink(f)

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

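    # Sketch of the data-filter plumbing above (hedged; the filter name and
    # function are illustrative, not part of this module): a callable
    # registered via adddatafilter() is matched by prefix against the
    # command string of an [encode]/[decode] config entry, e.g.
    #
    #     def upper(s, params, **kwargs):
    #         # receives the file data plus ui/repo/filename keyword args
    #         return s.upper()
    #     repo.adddatafilter('upper:', upper)
    #     # with "[encode] **.txt = upper:", wread() then passes .txt data
    #     # through upper() instead of spawning a shell command
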
    def wread(self, filename):
        if self._link(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wvfs.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wvfs.symlink(data, filename)
        else:
            self.wvfs.write(filename, data)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

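    # Direction of the filtering above (hedged reading of the code): wread()
    # pushes working-directory data through the [encode] filters (towards
    # the store), while wwrite()/wwritedata() push store data through the
    # [decode] filters (back towards the working directory).
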
    def currenttransaction(self):
        """return the current transaction or None if none exists"""
        if self._transref:
            tr = self._transref()
        else:
            tr = None

        if tr and tr.running():
            return tr
        return None

    def transaction(self, desc, report=None):
        if (self.ui.configbool('devel', 'all')
            or self.ui.configbool('devel', 'check-locks')):
            l = self._lockref and self._lockref()
            if l is None or not l.held:
                msg = 'transaction with no lock\n'
                if self.ui.tracebackflag:
                    util.debugstacktrace(msg, 1)
                else:
                    self.ui.write_err(msg)
        tr = self.currenttransaction()
        if tr is not None:
            return tr.nest()

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found"),
                hint=_("run 'hg recover' to clean up transaction"))

        self.hook('pretxnopen', throw=True, txnname=desc)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        if report:
            rp = report
        else:
            rp = self.ui.warn
        vfsmap = {'plain': self.vfs} # root of .hg/
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        def validate(tr):
            """will run pre-closing hooks"""
            pending = lambda: tr.writepending() and self.root or ""
            reporef().hook('pretxnclose', throw=True, pending=pending,
                           txnname=desc)

        tr = transaction.transaction(rp, self.sopener, vfsmap,
                                     "journal",
                                     "undo",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     validator=validate)
        # note: writing the fncache only during finalize means that the file
        # is outdated when running hooks. As fncache is used for streaming
        # clone, this is not expected to break anything that happens during
        # the hooks.
        tr.addfinalize('flush-fncache', self.store.write)
        def txnclosehook(tr2):
            """To be run if transaction is successful, will schedule a hook run
            """
            def hook():
                reporef().hook('txnclose', throw=False, txnname=desc,
                               **tr2.hookargs)
            reporef()._afterlock(hook)
        tr.addfinalize('txnclose-hook', txnclosehook)
        self._transref = weakref.ref(tr)
        return tr

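    # Usage sketch (hedged; mirrors the pattern commitctx() uses below):
    #
    #     tr = repo.transaction('some-operation')
    #     try:
    #         ...write store data through tr...
    #         tr.close()      # commit the journalled changes
    #     finally:
    #         tr.release()    # rolls back if close() was never reached
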
    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    def _writejournal(self, desc):
        self.vfs.write("journal.dirstate",
                       self.vfs.tryread("dirstate"))
        self.vfs.write("journal.branch",
                       encoding.fromlocal(self.dirstate.branch()))
        self.vfs.write("journal.desc",
                       "%d\n%s\n" % (len(self), desc))
        self.vfs.write("journal.bookmarks",
                       self.vfs.tryread("bookmarks"))
        self.svfs.write("journal.phaseroots",
                        self.svfs.tryread("phaseroots"))

    def recover(self):
        lock = self.lock()
        try:
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                vfsmap = {'': self.svfs,
                          'plain': self.vfs}
                transaction.rollback(self.svfs, vfsmap, "journal",
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False, force=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                return self._rollback(dryrun, force)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force):
        ui = self.ui
        try:
            args = self.vfs.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise util.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        vfsmap = {'plain': self.vfs, '': self.svfs}
        transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks')
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots')
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            self.vfs.rename('undo.dirstate', 'dirstate')
            try:
                branch = self.vfs.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            self.dirstate.invalidate()
            parents = tuple([p.rev() for p in self.parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def invalidatecaches(self):
        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This is different from dirstate.invalidate() in that it doesn't
        always reread the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self):
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in self._filecache:
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue

            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extensions should hook this to invalidate their caches
        self.invalidate()
        self.invalidatedirstate()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lockmod.lock(vfs, lockname,
                             int(self.ui.config("ui", "timeout", "600")),
                             releasefn, desc=desc)
            self.ui.warn(_("got lock after %s seconds\n") % l.delay)
        if acquirefn:
            acquirefn()
        return l

    def _afterlock(self, callback):
        """add a callback to the current repository lock.

        The callback will be executed on lock release."""
        l = self._lockref and self._lockref()
        if l:
            l.postrelease.append(callback)
        else:
            callback()

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            for k, ce in self._filecache.items():
                if k == 'dirstate' or k not in self.__dict__:
                    continue
                ce.refresh()

        l = self._lock(self.svfs, "lock", wait, unlock,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        if (self.ui.configbool('devel', 'all')
            or self.ui.configbool('devel', 'check-locks')):
            l = self._lockref and self._lockref()
            if l is not None and l.held:
                msg = '"lock" taken before "wlock"\n'
                if self.ui.tracebackflag:
                    util.debugstacktrace(msg, 1)
                else:
                    self.ui.write_err(msg)
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write()

            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

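    # Lock-ordering sketch (hedged; this is what the devel check above
    # warns about, and the pattern rollback() uses): take wlock() before
    # lock() when both are needed, and release them in reverse order:
    #
    #     wlock = lock = None
    #     try:
    #         wlock = repo.wlock()   # working-directory lock first
    #         lock = repo.lock()     # then the store lock
    #         ...
    #     finally:
    #         release(lock, wlock)
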
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = manifest2.get(fname, nullid)
        if isinstance(fctx, context.filectx):
            node = fctx.filenode()
            if node in [fparent1, fparent2]:
                self.ui.debug('reusing %s filelog entry\n' % fname)
                return node

        flog = self.file(fname)
        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                 should record that bar descends from
            #                 bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4   as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # Here, we used to search backwards through history to try to find
            # where the file copy came from if the source of a copy was not in
            # the parent directory. However, this doesn't actually make sense
            # to do (what does a copy from something not in your working copy
            # even mean?) and it causes bugs (eg, issue4476). Instead, we will
            # warn the user that copy information was dropped, so if they
            # didn't expect this outcome it can be fixed, but this is the
            # correct behavior in this circumstance.

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        text = fctx.data()
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
        # are just the flags changed during merge?
        elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if not force and merge and not match.always():
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            status = self.status(match=match, clean=force)
            if force:
                status.modified.extend(status.clean) # mq may commit clean files

            # check subrepos
            subs = []
            commitsubs = set()
            newstate = wctx.substate.copy()
            # only manage subrepos and .hgsubstate if .hgsub is present
            if '.hgsub' in wctx:
                # we'll decide whether to track this ourselves, thanks
                for c in status.modified, status.added, status.removed:
                    if '.hgsubstate' in c:
                        c.remove('.hgsubstate')

                # compare current state to last committed state
                # build new substate based on last committed state
                oldstate = wctx.p1().substate
                for s in sorted(newstate.keys()):
                    if not match(s):
                        # ignore working copy, use old state if present
                        if s in oldstate:
                            newstate[s] = oldstate[s]
                            continue
                        if not force:
                            raise util.Abort(
                                _("commit with new subrepo %s excluded") % s)
                    dirtyreason = wctx.sub(s).dirtyreason(True)
                    if dirtyreason:
                        if not self.ui.configbool('ui', 'commitsubrepos'):
                            raise util.Abort(dirtyreason,
                                hint=_("use --subrepos for recursive commit"))
                        subs.append(s)
                        commitsubs.add(s)
                    else:
                        bs = wctx.sub(s).basestate()
                        newstate[s] = (newstate[s][0], bs, newstate[s][2])
                        if oldstate.get(s, (None, None, None))[1] != bs:
                            subs.append(s)

                # check for removed subrepos
                for p in wctx.parents():
                    r = [s for s in p.substate if s not in newstate]
                    subs += [s for s in r if match(s)]
                if subs:
                    if (not match('.hgsub') and
                        '.hgsub' in (wctx.modified() + wctx.added())):
                        raise util.Abort(
                            _("can't commit subrepos without .hgsub"))
                    status.modified.insert(0, '.hgsubstate')

            elif '.hgsub' in status.removed:
                # clean up .hgsubstate when .hgsub is removed
                if ('.hgsubstate' in wctx and
                    '.hgsubstate' not in (status.modified + status.added +
                                          status.removed)):
                    status.removed.insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(status.modified + status.added + status.removed)

                for f in match.files():
                    f = self.dirstate.normalize(f)
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in status.deleted:
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            cctx = context.workingcommitctx(self, status,
                                            text, user, date, extra)

            if (not force and not extra.get("close") and not merge
                and not cctx.files()
                and wctx.branch() == wctx.p1().branch()):
                return None

            if merge and cctx.deleted():
                raise util.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate(self)
            for f in status.modified:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_('unresolved merge conflicts '
                                       '(see "hg help resolve")'))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepo.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
        finally:
            wlock.release()

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            # hack for commands that use a temporary commit (eg: histedit)
            # temporary commit got stripped before hook release
            if node in self:
                self.hook("commit", node=node, parent1=parent1,
                          parent2=parent2)
        self._afterlock(commithook)
        return ret

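    # Usage sketch (hedged; argument values are illustrative):
    #
    #     node = repo.commit(text='fix a thing', user='alice <a@example.com>')
    #     if node is None:
    #         pass   # nothing changed, so no new revision was created
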
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = None
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.files():
                m1 = p1.manifest()
                m2 = p2.manifest()
                m = m1.copy()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                self.ui.note(_("committing files:\n"))
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(fctx, m1, m2, linkrev,
                                                    trp, changed)
                            m.setflag(f, fctx.flags())
                    except OSError, inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError, inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                            raise

                # update manifest
                self.ui.note(_("committing manifest\n"))
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m]
                for f in drop:
                    del m[f]
                mn = self.manifest.add(m, trp, linkrev,
                                       p1.manifestnode(), p2.manifestnode(),
                                       added, drop)
                files = changed + removed
            else:
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.ui.note(_("committing changelog\n"))
            self.changelog.delayupdate(tr)
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: tr.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            # set the new commit in its proper phase
            targetphase = subrepo.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets;
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if the minimal phase was 0 we don't need to retract anything
                phases.retractboundary(self, tr, targetphase, [n])
            tr.close()
            branchmap.updatecache(self.filtered('served'))
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # update the 'served' branch cache to help read only server process
        # Thanks to branchcache collaboration this is done from the nearest
        # filtered subset and it is expected to be fast.
        branchmap.updatecache(self.filtered('served'))

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def walk(self, match, node=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function
        '''
        return self[node].walk(match)

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(node2, match, ignored, clean, unknown,
                                  listsubrepos)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

1672 def branches(self, nodes):
1672 def branches(self, nodes):
1673 if not nodes:
1673 if not nodes:
1674 nodes = [self.changelog.tip()]
1674 nodes = [self.changelog.tip()]
1675 b = []
1675 b = []
1676 for n in nodes:
1676 for n in nodes:
1677 t = n
1677 t = n
1678 while True:
1678 while True:
1679 p = self.changelog.parents(n)
1679 p = self.changelog.parents(n)
1680 if p[1] != nullid or p[0] == nullid:
1680 if p[1] != nullid or p[0] == nullid:
1681 b.append((t, n, p[0], p[1]))
1681 b.append((t, n, p[0], p[1]))
1682 break
1682 break
1683 n = p[0]
1683 n = p[0]
1684 return b
1684 return b
1685
1685
1686 def between(self, pairs):
1686 def between(self, pairs):
1687 r = []
1687 r = []
1688
1688
1689 for top, bottom in pairs:
1689 for top, bottom in pairs:
1690 n, l, i = top, [], 0
1690 n, l, i = top, [], 0
1691 f = 1
1691 f = 1
1692
1692
1693 while n != bottom and n != nullid:
1693 while n != bottom and n != nullid:
1694 p = self.changelog.parents(n)[0]
1694 p = self.changelog.parents(n)[0]
1695 if i == f:
1695 if i == f:
1696 l.append(n)
1696 l.append(n)
1697 f = f * 2
1697 f = f * 2
1698 n = p
1698 n = p
1699 i += 1
1699 i += 1
1700
1700
1701 r.append(l)
1701 r.append(l)
1702
1702
1703 return r
1703 return r
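The loop above samples the first-parent chain at exponentially growing distances (1, 2, 4, 8, ... ancestors from `top`), which keeps replies small even for long histories. A minimal standalone sketch of the same sampling, using a toy linear "history" where each node's parent is simply node - 1 (the `parent` callable is an assumption for illustration, not Mercurial API):

    def sample_between(top, bottom, parent):
        # mirrors the sampling loop in localrepository.between()
        n, l, i, f = top, [], 0, 1
        while n != bottom:
            p = parent(n)
            if i == f:          # hit at distances 1, 2, 4, 8, ... from top
                l.append(n)
                f = f * 2
            n = p
            i += 1
        return l

    parent = lambda x: x - 1        # toy linear history
    sample_between(10, 0, parent)   # -> [9, 8, 6, 2]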
1704
1705 def checkpush(self, pushop):
1706 """Extensions can override this function if additional checks have
1707 to be performed before pushing, or call it if they override the push
1708 command.
1709 """
1710 pass
1711
1712 @unfilteredpropertycache
1713 def prepushoutgoinghooks(self):
1714 """Return util.hooks consists of "(repo, remote, outgoing)"
1714 """Return util.hooks consists of "(repo, remote, outgoing)"
1715 functions, which are called before pushing changesets.
1715 functions, which are called before pushing changesets.
1716 """
1716 """
1717 return util.hooks()
1717 return util.hooks()
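For context, a sketch of how an extension might use this hook point; the extension name and the check performed are assumptions for illustration (util.hooks exposes an add(source, hook) method):

    def checkoutgoing(repo, remote, outgoing):
        # veto the push by raising if anything in the outgoing set looks wrong
        for node in outgoing.missing:
            pass  # inspect repo[node] here

    def reposetup(ui, repo):
        if util.safehasattr(repo, 'prepushoutgoinghooks'):
            repo.prepushoutgoinghooks.add('myext', checkoutgoing)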
1718
1719 def stream_in(self, remote, requirements):
1720 lock = self.lock()
1721 try:
1722 # Save remote branchmap. We will use it later
1723 # to speed up branchcache creation
1724 rbranchmap = None
1725 if remote.capable("branchmap"):
1726 rbranchmap = remote.branchmap()
1727
1728 fp = remote.stream_out()
1729 l = fp.readline()
1730 try:
1731 resp = int(l)
1732 except ValueError:
1733 raise error.ResponseError(
1734 _('unexpected response from remote server:'), l)
1735 if resp == 1:
1736 raise util.Abort(_('operation forbidden by server'))
1737 elif resp == 2:
1738 raise util.Abort(_('locking the remote repository failed'))
1739 elif resp != 0:
1740 raise util.Abort(_('the server sent an unknown error code'))
1741 self.ui.status(_('streaming all changes\n'))
1742 l = fp.readline()
1743 try:
1744 total_files, total_bytes = map(int, l.split(' ', 1))
1745 except (ValueError, TypeError):
1746 raise error.ResponseError(
1747 _('unexpected response from remote server:'), l)
1748 self.ui.status(_('%d files to transfer, %s of data\n') %
1749 (total_files, util.bytecount(total_bytes)))
1750 handled_bytes = 0
1751 self.ui.progress(_('clone'), 0, total=total_bytes)
1752 start = time.time()
1753
1754 tr = self.transaction(_('clone'))
1755 try:
1756 for i in xrange(total_files):
1757 # XXX doesn't support '\n' or '\r' in filenames
1758 l = fp.readline()
1759 try:
1760 name, size = l.split('\0', 1)
1761 size = int(size)
1762 except (ValueError, TypeError):
1763 raise error.ResponseError(
1764 _('unexpected response from remote server:'), l)
1765 if self.ui.debugflag:
1766 self.ui.debug('adding %s (%s)\n' %
1767 (name, util.bytecount(size)))
1768 # for backwards compat, name was partially encoded
1769 ofp = self.svfs(store.decodedir(name), 'w')
1770 for chunk in util.filechunkiter(fp, limit=size):
1771 handled_bytes += len(chunk)
1772 self.ui.progress(_('clone'), handled_bytes,
1773 total=total_bytes)
1774 ofp.write(chunk)
1775 ofp.close()
1776 tr.close()
1777 finally:
1778 tr.release()
1779
1780 # Writing straight to files circumvented the in-memory caches
1781 self.invalidate()
1782
1783 elapsed = time.time() - start
1784 if elapsed <= 0:
1785 elapsed = 0.001
1786 self.ui.progress(_('clone'), None)
1787 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1788 (util.bytecount(total_bytes), elapsed,
1789 util.bytecount(total_bytes / elapsed)))
1790
1791 # new requirements = old non-format requirements +
1792 # new format-related
1793 # requirements from the streamed-in repository
1794 requirements.update(set(self.requirements) - self.supportedformats)
1795 self._applyrequirements(requirements)
1796 self._writerequirements()
1797
1798 if rbranchmap:
1799 rbheads = []
1800 closed = []
1801 for bheads in rbranchmap.itervalues():
1802 rbheads.extend(bheads)
1803 for h in bheads:
1804 r = self.changelog.rev(h)
1805 b, c = self.changelog.branchinfo(r)
1806 if c:
1807 closed.append(h)
1808
1809 if rbheads:
1810 rtiprev = max((int(self.changelog.rev(node))
1811 for node in rbheads))
1812 cache = branchmap.branchcache(rbranchmap,
1813 self[rtiprev].node(),
1814 rtiprev,
1815 closednodes=closed)
1816 # Try to stick it as low as possible
1817 # filters above 'served' are unlikely to be fetched from a clone
1818 for candidate in ('base', 'immutable', 'served'):
1819 rview = self.filtered(candidate)
1820 if cache.validfor(rview):
1821 self._branchcaches[candidate] = cache
1822 cache.write(rview)
1823 break
1824 self.invalidate()
1825 return len(self.heads()) + 1
1826 finally:
1827 lock.release()
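The wire format consumed above is simple enough to sketch: a status line, a "<file count> <byte count>" line, then for each file a "<name>\0<size>" header followed by exactly <size> raw bytes. A toy parser (standalone illustration, not Mercurial API; the sample payload is made up):

    from StringIO import StringIO

    def parse_stream(fp):
        # mirrors the framing handled by stream_in() above
        if int(fp.readline()) != 0:     # 0 = OK, 1/2 = server errors
            raise ValueError('server refused the stream')
        total_files, total_bytes = map(int, fp.readline().split(' ', 1))
        for _ in xrange(total_files):
            name, size = fp.readline().split('\0', 1)
            yield name, fp.read(int(size))

    fp = StringIO('0\n1 5\ndata/a.i\x005\nhello')
    list(parse_stream(fp))   # -> [('data/a.i', 'hello')]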
1828
1829 def clone(self, remote, heads=[], stream=None):
1830 '''clone remote repository.
1831
1832 keyword arguments:
1833 heads: list of revs to clone (forces use of pull)
1834 stream: use streaming clone if possible'''
1835
1836 # now, all clients that can request uncompressed clones can
1837 # read repo formats supported by all servers that can serve
1838 # them.
1839
1840 # if revlog format changes, client will have to check version
1841 # and format flags on "stream" capability, and use
1842 # uncompressed only if compatible.
1843
1844 if stream is None:
1845 # if the server explicitly prefers to stream (for fast LANs)
1846 stream = remote.capable('stream-preferred')
1847
1848 if stream and not heads:
1849 # 'stream' means remote revlog format is revlogv1 only
1850 if remote.capable('stream'):
1851 self.stream_in(remote, set(('revlogv1',)))
1852 else:
1853 # otherwise, 'streamreqs' contains the remote revlog format
1854 streamreqs = remote.capable('streamreqs')
1855 if streamreqs:
1856 streamreqs = set(streamreqs.split(','))
1857 # if we support it, stream in and adjust our requirements
1858 if not streamreqs - self.supportedformats:
1859 self.stream_in(remote, streamreqs)
1860
1861 quiet = self.ui.backupconfig('ui', 'quietbookmarkmove')
1862 try:
1863 self.ui.setconfig('ui', 'quietbookmarkmove', True, 'clone')
1864 ret = exchange.pull(self, remote, heads).cgresult
1865 finally:
1866 self.ui.restoreconfig(quiet)
1867 return ret
1868
1869 def pushkey(self, namespace, key, old, new):
1870 try:
1871 self.hook('prepushkey', throw=True, namespace=namespace, key=key,
1872 old=old, new=new)
1873 except error.HookAbort, exc:
1874 self.ui.write_err(_("pushkey-abort: %s\n") % exc)
1875 if exc.hint:
1876 self.ui.write_err(_("(%s)\n") % exc.hint)
1877 return False
1878 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
1879 ret = pushkey.push(self, namespace, key, old, new)
1880 def runhook():
1881 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
1882 ret=ret)
1883 self._afterlock(runhook)
1884 return ret
1885
1886 def listkeys(self, namespace):
1887 self.hook('prelistkeys', throw=True, namespace=namespace)
1888 self.ui.debug('listing keys for "%s"\n' % namespace)
1889 values = pushkey.list(self, namespace)
1890 self.hook('listkeys', namespace=namespace, values=values)
1891 return values
1892
1893 def debugwireargs(self, one, two, three=None, four=None, five=None):
1894 '''used to test argument passing over the wire'''
1895 return "%s %s %s %s %s" % (one, two, three, four, five)
1896
1897 def savecommitmessage(self, text):
1898 fp = self.vfs('last-message.txt', 'wb')
1899 try:
1900 fp.write(text)
1901 finally:
1902 fp.close()
1903 return self.pathto(fp.name[len(self.root) + 1:])
1904
1905 # used to avoid circular references so destructors work
1906 def aftertrans(files):
1907 renamefiles = [tuple(t) for t in files]
1908 def a():
1909 for vfs, src, dest in renamefiles:
1910 try:
1911 vfs.rename(src, dest)
1912 except OSError: # journal file does not yet exist
1913 pass
1914 return a
1915
1916 def undoname(fn):
1917 base, name = os.path.split(fn)
1918 assert name.startswith('journal')
1919 return os.path.join(base, name.replace('journal', 'undo', 1))
1920
1921 def instance(ui, path, create):
1922 return localrepository(ui, util.urllocalpath(path), create)
1923
1924 def islocal(path):
1925 return True
@@ -1,873 +1,873 b''
1 # wireproto.py - generic wire protocol support functions
2 #
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 import urllib, tempfile, os, sys
9 from i18n import _
10 from node import bin, hex
11 import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
12 import peer, error, encoding, util, store, exchange
13
14
15 class abstractserverproto(object):
16 """abstract class that summarizes the protocol API
17
18 Used as reference and documentation.
19 """
20
21 def getargs(self, args):
22 """return the value for arguments in <args>
23
24 returns a list of values (same order as <args>)"""
25 raise NotImplementedError()
26
27 def getfile(self, fp):
28 """write the whole content of a file into a file like object
29
30 The file is in the form::
31
32 (<chunk-size>\n<chunk>)+0\n
33
34 chunk size is the ASCII representation of the integer.
35 """
35 """
36 raise NotImplementedError()
36 raise NotImplementedError()
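A toy encoder for this framing (an illustration, not part of the API): each chunk is preceded by its decimal length and a newline, and a zero-length chunk terminates the stream:

    def encodechunks(data, chunksize=4096):
        # produce the (<chunk-size>\n<chunk>)+0\n framing from getfile()
        for start in xrange(0, len(data), chunksize):
            chunk = data[start:start + chunksize]
            yield '%d\n%s' % (len(chunk), chunk)
        yield '0\n'

    ''.join(encodechunks('hello', 2))   # -> '2\nhe2\nll1\no0\n'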
37
38 def redirect(self):
39 """may setup interception for stdout and stderr
39 """may setup interception for stdout and stderr
40
41 See also the `restore` method."""
42 raise NotImplementedError()
43
44 # If the `redirect` function does install interception, the `restore`
45 # function MUST be defined. If interception is not used, this function
46 # MUST NOT be defined.
47 #
48 # left commented here on purpose
49 #
50 #def restore(self):
51 # """reinstall previous stdout and stderr and return intercepted stdout
52 # """
53 # raise NotImplementedError()
54
55 def groupchunks(self, cg):
56 """return 4096 chunks from a changegroup object
56 """return 4096 chunks from a changegroup object
57
58 Some protocols may have compressed the contents."""
59 raise NotImplementedError()
60
61 # abstract batching support
62
63 class future(object):
64 '''placeholder for a value to be set later'''
65 def set(self, value):
66 if util.safehasattr(self, 'value'):
67 raise error.RepoError("future is already set")
68 self.value = value
69
70 class batcher(object):
71 '''base class for batches of commands submittable in a single request
72
73 All methods invoked on instances of this class are simply queued and
74 return a future for the result. Once you call submit(), all the queued
75 calls are performed and the results set in their respective futures.
76 '''
77 def __init__(self):
78 self.calls = []
79 def __getattr__(self, name):
80 def call(*args, **opts):
81 resref = future()
82 self.calls.append((name, args, opts, resref,))
83 return resref
84 return call
85 def submit(self):
86 pass
87
88 class localbatch(batcher):
89 '''performs the queued calls directly'''
90 def __init__(self, local):
91 batcher.__init__(self)
92 self.local = local
93 def submit(self):
94 for name, args, opts, resref in self.calls:
95 resref.set(getattr(self.local, name)(*args, **opts))
96
97 class remotebatch(batcher):
98 '''batches the queued calls; uses as few roundtrips as possible'''
99 def __init__(self, remote):
100 '''remote must support _submitbatch(encbatch) and
101 _submitone(op, encargs)'''
102 batcher.__init__(self)
103 self.remote = remote
104 def submit(self):
105 req, rsp = [], []
106 for name, args, opts, resref in self.calls:
107 mtd = getattr(self.remote, name)
108 batchablefn = getattr(mtd, 'batchable', None)
109 if batchablefn is not None:
110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 encargsorres, encresref = batchable.next()
112 if encresref:
113 req.append((name, encargsorres,))
114 rsp.append((batchable, encresref, resref,))
115 else:
116 resref.set(encargsorres)
117 else:
118 if req:
119 self._submitreq(req, rsp)
120 req, rsp = [], []
121 resref.set(mtd(*args, **opts))
122 if req:
123 self._submitreq(req, rsp)
124 def _submitreq(self, req, rsp):
125 encresults = self.remote._submitbatch(req)
126 for encres, r in zip(encresults, rsp):
127 batchable, encresref, resref = r
128 encresref.set(encres)
129 resref.set(batchable.next())
130
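Typical use of this machinery from calling code, sketched (the peer object and the queued commands are assumptions for illustration):

    b = remote.batch()       # a remotebatch for wire peers
    f1 = b.heads()           # queued; returns a future immediately
    f2 = b.known(nodes)      # queued as well
    b.submit()               # one round trip performs both calls
    heads, known = f1.value, f2.value   # futures are now filled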
131 def batchable(f):
132 '''annotation for batchable methods
133
134 Such methods must implement a coroutine as follows:
135
136 @batchable
137 def sample(self, one, two=None):
138 # Handle locally computable results first:
139 if not one:
140 yield "a local result", None
141 # Build list of encoded arguments suitable for your wire protocol:
142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 # Create future for injection of encoded result:
144 encresref = future()
145 # Return encoded arguments and future:
146 yield encargs, encresref
147 # Assuming the future to be filled with the result from the batched
148 # request now. Decode it:
149 yield decode(encresref.value)
150
151 The decorator returns a function which wraps this coroutine as a plain
152 method, but adds the original method as an attribute called "batchable",
153 which is used by remotebatch to split the call into separate encoding and
154 decoding phases.
155 '''
156 def plain(*args, **opts):
157 batchable = f(*args, **opts)
158 encargsorres, encresref = batchable.next()
159 if not encresref:
160 return encargsorres # a local result in this case
161 self = args[0]
162 encresref.set(self._submitone(f.func_name, encargsorres))
163 return batchable.next()
164 setattr(plain, 'batchable', f)
165 return plain
166
167 # list of nodes encoding / decoding
168
169 def decodelist(l, sep=' '):
170 if l:
171 return map(bin, l.split(sep))
172 return []
173
174 def encodelist(l, sep=' '):
175 try:
176 return sep.join(map(hex, l))
177 except TypeError:
178 print l
179 raise
180
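A quick round trip (the node values are made up; real nodes are 20-byte binary hashes):

    encodelist(['\x00' * 20, '\xff' * 20])
    # -> '0000...0000 ffff...ffff' (two 40-character hex strings)
    decodelist('00' * 20 + ' ' + 'ff' * 20)
    # -> the two binary nodes again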
181 # batched call argument encoding
182
183 def escapearg(plain):
184 return (plain
185 .replace(':', '::')
186 .replace(',', ':,')
187 .replace(';', ':;')
188 .replace('=', ':='))
189
190 def unescapearg(escaped):
191 return (escaped
192 .replace(':=', '=')
193 .replace(':;', ';')
194 .replace(':,', ',')
195 .replace('::', ':'))
196
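The escaping frees ';', ',' and '=' to delimit commands, arguments and values in a batch request, with ':' doubling as its own escape character. Round trip:

    escapearg('a=b;c,d')        # -> 'a:=b:;c:,d'
    unescapearg('a:=b:;c:,d')   # -> 'a=b;c,d'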
197 # mapping of options accepted by getbundle and their types
198 #
199 # Meant to be extended by extensions. It is the extensions' responsibility
200 # to ensure such options are properly processed in exchange.getbundle.
201 #
202 # supported types are:
203 #
204 # :nodes: list of binary nodes
205 # :csv: list of comma-separated values
206 # :plain: string with no transformation needed.
207 gboptsmap = {'heads': 'nodes',
208 'common': 'nodes',
209 'obsmarkers': 'boolean',
210 'bundlecaps': 'csv',
211 'listkeys': 'csv',
212 'cg': 'boolean'}
213
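Note that the type list above omits 'boolean', which the map already uses; booleans travel over the wire as '0'/'1'. Sketched encodings of one option of each type, mirroring the client code in wirepeer.getbundle() below (n1 and n2 stand in for binary nodes):

    encodelist([n1, n2])       # 'nodes'   -> '<hex(n1)> <hex(n2)>'
    ','.join(['HG2Y', 'foo'])  # 'csv'     -> 'HG2Y,foo'
    '%i' % bool('x')           # 'boolean' -> '1'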
214 # client side
215
216 class wirepeer(peer.peerrepository):
217
218 def batch(self):
219 return remotebatch(self)
220 def _submitbatch(self, req):
221 cmds = []
222 for op, argsdict in req:
223 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
224 cmds.append('%s %s' % (op, args))
225 rsp = self._call("batch", cmds=';'.join(cmds))
226 return rsp.split(';')
227 def _submitone(self, op, args):
228 return self._call(op, **args)
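For instance (values assumed), two queued calls serialize into a single "batch" command whose body separates commands with ';' and arguments with ',':

    req = [('heads', {}), ('known', {'nodes': '00ab 11cd'})]
    # _submitbatch builds: 'heads ;known nodes=00ab 11cd'
    # and the reply is the ';'-joined list of per-command results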
229
230 @batchable
231 def lookup(self, key):
232 self.requirecap('lookup', _('look up remote revision'))
233 f = future()
234 yield {'key': encoding.fromlocal(key)}, f
235 d = f.value
236 success, data = d[:-1].split(" ", 1)
237 if int(success):
238 yield bin(data)
239 self._abort(error.RepoError(data))
240
241 @batchable
242 def heads(self):
243 f = future()
244 yield {}, f
245 d = f.value
246 try:
247 yield decodelist(d[:-1])
248 except ValueError:
249 self._abort(error.ResponseError(_("unexpected response:"), d))
250
251 @batchable
252 def known(self, nodes):
253 f = future()
254 yield {'nodes': encodelist(nodes)}, f
255 d = f.value
256 try:
257 yield [bool(int(b)) for b in d]
258 except ValueError:
259 self._abort(error.ResponseError(_("unexpected response:"), d))
260
261 @batchable
262 def branchmap(self):
263 f = future()
264 yield {}, f
265 d = f.value
266 try:
267 branchmap = {}
268 for branchpart in d.splitlines():
269 branchname, branchheads = branchpart.split(' ', 1)
270 branchname = encoding.tolocal(urllib.unquote(branchname))
271 branchheads = decodelist(branchheads)
272 branchmap[branchname] = branchheads
273 yield branchmap
274 except TypeError:
275 self._abort(error.ResponseError(_("unexpected response:"), d))
276
277 def branches(self, nodes):
278 n = encodelist(nodes)
279 d = self._call("branches", nodes=n)
280 try:
281 br = [tuple(decodelist(b)) for b in d.splitlines()]
282 return br
283 except ValueError:
284 self._abort(error.ResponseError(_("unexpected response:"), d))
285
286 def between(self, pairs):
287 batch = 8 # avoid giant requests
288 r = []
289 for i in xrange(0, len(pairs), batch):
290 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
291 d = self._call("between", pairs=n)
292 try:
293 r.extend(l and decodelist(l) or [] for l in d.splitlines())
294 except ValueError:
295 self._abort(error.ResponseError(_("unexpected response:"), d))
296 return r
297
298 @batchable
299 def pushkey(self, namespace, key, old, new):
300 if not self.capable('pushkey'):
301 yield False, None
302 f = future()
303 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
304 yield {'namespace': encoding.fromlocal(namespace),
305 'key': encoding.fromlocal(key),
306 'old': encoding.fromlocal(old),
307 'new': encoding.fromlocal(new)}, f
308 d = f.value
309 d, output = d.split('\n', 1)
310 try:
311 d = bool(int(d))
312 except ValueError:
313 raise error.ResponseError(
314 _('push failed (unexpected response):'), d)
315 for l in output.splitlines(True):
316 self.ui.status(_('remote: '), l)
317 yield d
318
319 @batchable
320 def listkeys(self, namespace):
321 if not self.capable('pushkey'):
322 yield {}, None
323 f = future()
324 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
325 yield {'namespace': encoding.fromlocal(namespace)}, f
326 d = f.value
327 yield pushkeymod.decodekeys(d)
328
329 def stream_out(self):
330 return self._callstream('stream_out')
331
332 def changegroup(self, nodes, kind):
333 n = encodelist(nodes)
334 f = self._callcompressable("changegroup", roots=n)
335 return changegroupmod.cg1unpacker(f, 'UN')
336
337 def changegroupsubset(self, bases, heads, kind):
338 self.requirecap('changegroupsubset', _('look up remote changes'))
339 bases = encodelist(bases)
340 heads = encodelist(heads)
341 f = self._callcompressable("changegroupsubset",
342 bases=bases, heads=heads)
343 return changegroupmod.cg1unpacker(f, 'UN')
344
345 def getbundle(self, source, **kwargs):
346 self.requirecap('getbundle', _('look up remote changes'))
347 opts = {}
348 for key, value in kwargs.iteritems():
349 if value is None:
350 continue
351 keytype = gboptsmap.get(key)
352 if keytype is None:
353 assert False, 'unexpected'
354 elif keytype == 'nodes':
355 value = encodelist(value)
356 elif keytype == 'csv':
357 value = ','.join(value)
358 elif keytype == 'boolean':
359 value = '%i' % bool(value)
360 elif keytype != 'plain':
361 raise KeyError('unknown getbundle option type %s'
362 % keytype)
363 opts[key] = value
364 f = self._callcompressable("getbundle", **opts)
365 bundlecaps = kwargs.get('bundlecaps')
366 if bundlecaps is not None and 'HG2Y' in bundlecaps:
367 return bundle2.unbundle20(self.ui, f)
367 return bundle2.getunbundler(self.ui, f)
368 else:
369 return changegroupmod.cg1unpacker(f, 'UN')
370
371 def unbundle(self, cg, heads, source):
372 '''Send cg (a readable file-like object representing the
373 changegroup to push, typically a chunkbuffer object) to the
374 remote server as a bundle.
375
376 When pushing a bundle10 stream, return an integer indicating the
377 result of the push (see localrepository.addchangegroup()).
378
379 When pushing a bundle20 stream, return a bundle20 stream.'''
380
381 if heads != ['force'] and self.capable('unbundlehash'):
382 heads = encodelist(['hashed',
383 util.sha1(''.join(sorted(heads))).digest()])
384 else:
385 heads = encodelist(heads)
386
387 if util.safehasattr(cg, 'deltaheader'):
388 # this is a bundle10, do the old style call sequence
389 ret, output = self._callpush("unbundle", cg, heads=heads)
390 if ret == "":
391 raise error.ResponseError(
392 _('push failed:'), output)
393 try:
394 ret = int(ret)
395 except ValueError:
396 raise error.ResponseError(
397 _('push failed (unexpected response):'), ret)
398
399 for l in output.splitlines(True):
400 self.ui.status(_('remote: '), l)
401 else:
402 # bundle2 push. Send a stream, fetch a stream.
403 stream = self._calltwowaystream('unbundle', cg, heads=heads)
404 ret = bundle2.unbundle20(self.ui, stream)
404 ret = bundle2.getunbundler(self.ui, stream)
405 return ret
406
407 def debugwireargs(self, one, two, three=None, four=None, five=None):
408 # don't pass optional arguments left at their default value
409 opts = {}
410 if three is not None:
411 opts['three'] = three
412 if four is not None:
413 opts['four'] = four
414 return self._call('debugwireargs', one=one, two=two, **opts)
415
416 def _call(self, cmd, **args):
417 """execute <cmd> on the server
418
419 The command is expected to return a simple string.
420
421 returns the server reply as a string."""
422 raise NotImplementedError()
423
424 def _callstream(self, cmd, **args):
425 """execute <cmd> on the server
426
427 The command is expected to return a stream.
428
429 returns the server reply as a file like object."""
430 raise NotImplementedError()
431
432 def _callcompressable(self, cmd, **args):
433 """execute <cmd> on the server
434
435 The command is expected to return a stream.
436
437 The stream may have been compressed in some implementations. This
438 function takes care of the decompression. This is the only difference
439 with _callstream.
440
441 returns the server reply as a file like object.
442 """
443 raise NotImplementedError()
444
445 def _callpush(self, cmd, fp, **args):
446 """execute a <cmd> on server
446 """execute a <cmd> on server
447
448 The command is expected to be related to a push. Push has a special
449 return method.
450
451 returns the server reply as a (ret, output) tuple. ret is either
452 empty (error) or a stringified int.
453 """
454 raise NotImplementedError()
455
456 def _calltwowaystream(self, cmd, fp, **args):
457 """execute <cmd> on server
457 """execute <cmd> on server
458
459 The command will send a stream to the server and get a stream in reply.
460 """
461 raise NotImplementedError()
462
463 def _abort(self, exception):
464 """clearly abort the wire protocol connection and raise the exception
465 """
466 raise NotImplementedError()
467
468 # server side
469
470 # a wire protocol command can either return a string or one of these classes.
471 class streamres(object):
472 """wireproto reply: binary stream
473
474 The call was successful and the result is a stream.
475 Iterate on the `self.gen` attribute to retrieve chunks.
476 """
477 def __init__(self, gen):
478 self.gen = gen
479
480 class pushres(object):
481 """wireproto reply: success with simple integer return
482
483 The call was successful and returned an integer contained in `self.res`.
484 """
485 def __init__(self, res):
486 self.res = res
487
488 class pusherr(object):
489 """wireproto reply: failure
490
491 The call failed. The `self.res` attribute contains the error message.
492 """
493 def __init__(self, res):
494 self.res = res
495
496 class ooberror(object):
497 """wireproto reply: failure of a batch of operation
497 """wireproto reply: failure of a batch of operation
498
499 Something failed during a batch call. The error message is stored in
500 `self.message`.
501 """
502 def __init__(self, message):
503 self.message = message
504
505 def dispatch(repo, proto, command):
506 repo = repo.filtered("served")
507 func, spec = commands[command]
508 args = proto.getargs(spec)
509 return func(repo, proto, *args)
510
511 def options(cmd, keys, others):
512 opts = {}
513 for k in keys:
514 if k in others:
515 opts[k] = others[k]
516 del others[k]
517 if others:
518 sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
519 % (cmd, ",".join(others)))
520 return opts
521
522 # list of commands
523 commands = {}
524
525 def wireprotocommand(name, args=''):
526 """decorator for wire protocol command"""
527 def register(func):
528 commands[name] = (func, args)
529 return func
530 return register
531
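A sketch of how an extension could register its own command through this decorator ('whoami' and its behaviour are invented for illustration):

    @wireprotocommand('whoami')
    def whoami(repo, proto):
        # no arguments; plain string replies are sent back verbatim
        return repo.ui.username() + '\n'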
532 @wireprotocommand('batch', 'cmds *')
533 def batch(repo, proto, cmds, others):
534 repo = repo.filtered("served")
535 res = []
536 for pair in cmds.split(';'):
537 op, args = pair.split(' ', 1)
538 vals = {}
539 for a in args.split(','):
540 if a:
541 n, v = a.split('=')
542 vals[n] = unescapearg(v)
543 func, spec = commands[op]
544 if spec:
545 keys = spec.split()
546 data = {}
547 for k in keys:
548 if k == '*':
549 star = {}
550 for key in vals.keys():
551 if key not in keys:
552 star[key] = vals[key]
553 data['*'] = star
554 else:
555 data[k] = vals[k]
556 result = func(repo, proto, *[data[k] for k in keys])
557 else:
558 result = func(repo, proto)
559 if isinstance(result, ooberror):
560 return result
561 res.append(escapearg(result))
562 return ';'.join(res)
563
564 @wireprotocommand('between', 'pairs')
565 def between(repo, proto, pairs):
566 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
567 r = []
568 for b in repo.between(pairs):
569 r.append(encodelist(b) + "\n")
570 return "".join(r)
571
572 @wireprotocommand('branchmap')
573 def branchmap(repo, proto):
574 branchmap = repo.branchmap()
575 heads = []
576 for branch, nodes in branchmap.iteritems():
577 branchname = urllib.quote(encoding.fromlocal(branch))
578 branchnodes = encodelist(nodes)
579 heads.append('%s %s' % (branchname, branchnodes))
580 return '\n'.join(heads)
581
582 @wireprotocommand('branches', 'nodes')
583 def branches(repo, proto, nodes):
584 nodes = decodelist(nodes)
585 r = []
586 for b in repo.branches(nodes):
587 r.append(encodelist(b) + "\n")
588 return "".join(r)
589
590
591 wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
592 'known', 'getbundle', 'unbundlehash', 'batch']
593
594 def _capabilities(repo, proto):
595 """return a list of capabilities for a repo
596
597 This function exists to allow extensions to easily wrap capabilities
598 computation
599
600 - returns a list: easy to alter
601 - changes done here will be propagated to both the `capabilities` and
602 `hello` commands without any other action needed.
603 """
603 """
604 # copy to prevent modification of the global list
604 # copy to prevent modification of the global list
605 caps = list(wireprotocaps)
605 caps = list(wireprotocaps)
606 if _allowstream(repo.ui):
606 if _allowstream(repo.ui):
607 if repo.ui.configbool('server', 'preferuncompressed', False):
607 if repo.ui.configbool('server', 'preferuncompressed', False):
608 caps.append('stream-preferred')
608 caps.append('stream-preferred')
609 requiredformats = repo.requirements & repo.supportedformats
609 requiredformats = repo.requirements & repo.supportedformats
610 # if our local revlogs are just revlogv1, add 'stream' cap
610 # if our local revlogs are just revlogv1, add 'stream' cap
611 if not requiredformats - set(('revlogv1',)):
611 if not requiredformats - set(('revlogv1',)):
612 caps.append('stream')
612 caps.append('stream')
613 # otherwise, add 'streamreqs' detailing our local revlog format
613 # otherwise, add 'streamreqs' detailing our local revlog format
614 else:
614 else:
615 caps.append('streamreqs=%s' % ','.join(requiredformats))
615 caps.append('streamreqs=%s' % ','.join(requiredformats))
616 if repo.ui.configbool('experimental', 'bundle2-exp', False):
616 if repo.ui.configbool('experimental', 'bundle2-exp', False):
617 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
617 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
618 caps.append('bundle2-exp=' + urllib.quote(capsblob))
618 caps.append('bundle2-exp=' + urllib.quote(capsblob))
619 caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
619 caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
620 caps.append('httpheader=1024')
620 caps.append('httpheader=1024')
621 return caps
621 return caps
622
622
623 # If you are writing an extension and are considering wrapping this
624 # function, wrap `_capabilities` instead.
625 @wireprotocommand('capabilities')
626 def capabilities(repo, proto):
627 return ' '.join(_capabilities(repo, proto))
628
629 @wireprotocommand('changegroup', 'roots')
630 def changegroup(repo, proto, roots):
631 nodes = decodelist(roots)
632 cg = changegroupmod.changegroup(repo, nodes, 'serve')
633 return streamres(proto.groupchunks(cg))
634
635 @wireprotocommand('changegroupsubset', 'bases heads')
636 def changegroupsubset(repo, proto, bases, heads):
637 bases = decodelist(bases)
638 heads = decodelist(heads)
639 cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
640 return streamres(proto.groupchunks(cg))
641
642 @wireprotocommand('debugwireargs', 'one two *')
643 def debugwireargs(repo, proto, one, two, others):
644 # only accept optional args from the known set
645 opts = options('debugwireargs', ['three', 'four'], others)
646 return repo.debugwireargs(one, two, **opts)
647
648 # List of options accepted by getbundle.
649 #
650 # Meant to be extended by extensions. It is the extension's responsibility to
651 # ensure such options are properly processed in exchange.getbundle.
652 gboptslist = ['heads', 'common', 'bundlecaps']
653
654 @wireprotocommand('getbundle', '*')
655 def getbundle(repo, proto, others):
656 opts = options('getbundle', gboptsmap.keys(), others)
657 for k, v in opts.iteritems():
658 keytype = gboptsmap[k]
659 if keytype == 'nodes':
660 opts[k] = decodelist(v)
661 elif keytype == 'csv':
662 opts[k] = set(v.split(','))
663 elif keytype == 'boolean':
664 opts[k] = bool(v)
664 opts[k] = bool(v)
665 elif keytype != 'plain':
665 elif keytype != 'plain':
666 raise KeyError('unknown getbundle option type %s'
666 raise KeyError('unknown getbundle option type %s'
667 % keytype)
667 % keytype)
668 cg = exchange.getbundle(repo, 'serve', **opts)
668 cg = exchange.getbundle(repo, 'serve', **opts)
669 return streamres(proto.groupchunks(cg))
669 return streamres(proto.groupchunks(cg))
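Because the accepted names and their wire types live in the module-level `gboptsmap` table (defined elsewhere in this file), an extension that wants getbundle to carry an extra argument only has to register a type for it; the dispatch above then decodes the value off the wire. A hedged sketch, with 'mydata' as an invented option name:

    from mercurial import wireproto

    def extsetup(ui):
        # valid types, per the dispatch above:
        # 'nodes', 'csv', 'boolean' and 'plain'
        wireproto.gboptsmap['mydata'] = 'csv'

The matching processing on the exchange.getbundle side remains the extension's responsibility, as the comment above `gboptslist` notes.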
670
671 @wireprotocommand('heads')
672 def heads(repo, proto):
673     h = repo.heads()
674     return encodelist(h) + "\n"
675
676 @wireprotocommand('hello')
677 def hello(repo, proto):
678     '''the hello command returns a set of lines describing various
679     interesting things about the server, in an RFC822-like format.
680     Currently the only one defined is "capabilities", which
681     consists of a line in the form:
682
683     capabilities: space separated list of tokens
684     '''
685     return "capabilities: %s\n" % (capabilities(repo, proto))
686
687 @wireprotocommand('listkeys', 'namespace')
688 def listkeys(repo, proto, namespace):
689     d = repo.listkeys(encoding.tolocal(namespace)).items()
690     return pushkeymod.encodekeys(d)
691
692 @wireprotocommand('lookup', 'key')
693 def lookup(repo, proto, key):
694     try:
695         k = encoding.tolocal(key)
696         c = repo[k]
697         r = c.hex()
698         success = 1
699     except Exception, inst:
700         r = str(inst)
701         success = 0
702     return "%s %s\n" % (success, r)
703
704 @wireprotocommand('known', 'nodes *')
705 def known(repo, proto, nodes, others):
706     return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))
707
708 @wireprotocommand('pushkey', 'namespace key old new')
709 def pushkey(repo, proto, namespace, key, old, new):
710     # compatibility with pre-1.8 clients which were accidentally
711     # sending raw binary nodes rather than utf-8-encoded hex
712     if len(new) == 20 and new.encode('string-escape') != new:
713         # looks like it could be a binary node
714         try:
715             new.decode('utf-8')
716             new = encoding.tolocal(new) # but cleanly decodes as UTF-8
717         except UnicodeDecodeError:
718             pass # binary, leave unmodified
719     else:
720         new = encoding.tolocal(new) # normal path
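To see why `len(new) == 20` combined with the `string-escape` round-trip is a reasonable sniff, compare what the two kinds of client put on the wire (Python 2, values made up for illustration):

    hexnode = 'a' * 40          # modern clients send 40-char hex, so len != 20
    binnode = '\x01\xfe' * 10   # a raw 20-byte node from a pre-1.8 client
    assert len(binnode) == 20
    assert binnode.encode('string-escape') != binnode   # escaping alters it,
                                                        # so it is sniffed as binary
    assert ('a' * 20).encode('string-escape') == 'a' * 20  # a fully printable
                                                        # 20-byte value still
                                                        # takes the normal path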
721
722     if util.safehasattr(proto, 'restore'):
723
724         proto.redirect()
725
726         try:
727             r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
728                              encoding.tolocal(old), new) or False
729         except util.Abort:
730             r = False
731
732         output = proto.restore()
733
734         return '%s\n%s' % (int(r), output)
735
736     r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
737                      encoding.tolocal(old), new)
738     return '%s\n' % int(r)
739
740 def _allowstream(ui):
741     return ui.configbool('server', 'uncompressed', True, untrusted=True)
742
743 def _walkstreamfiles(repo):
744     # this is its own function so extensions can override it
745     return repo.store.walk()
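Since the indirection exists purely as an override point, an extension can swap in its own walker. One plausible shape, sketched with wrapfunction and an invented filter (the 'somedir/' prefix is made up):

    from mercurial import extensions, wireproto

    def _filteredwalk(orig, repo):
        # hypothetical: drop store files we do not want to stream
        for name, ename, size in orig(repo):
            if not name.startswith('somedir/'):
                yield name, ename, size

    def extsetup(ui):
        extensions.wrapfunction(wireproto, '_walkstreamfiles', _filteredwalk)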
746
747 @wireprotocommand('stream_out')
748 def stream(repo, proto):
749     '''If the server supports streaming clone, it advertises the "stream"
750     capability with a value representing the version and flags of the repo
751     it is serving. The client checks to see if it understands the format.
752
753     The format is simple: the server writes out a line with the number
754     of files, then the total number of bytes to be transferred (separated
755     by a space). Then, for each file, the server first writes the filename
756     and file size (separated by the null character), then the file contents.
757     '''
758
759     if not _allowstream(repo.ui):
760         return '1\n'
761
762     entries = []
763     total_bytes = 0
764     try:
765         # get consistent snapshot of repo, lock during scan
766         lock = repo.lock()
767         try:
768             repo.ui.debug('scanning\n')
769             for name, ename, size in _walkstreamfiles(repo):
770                 if size:
771                     entries.append((name, size))
772                     total_bytes += size
773         finally:
774             lock.release()
775     except error.LockError:
776         return '2\n' # error: 2
777
778     def streamer(repo, entries, total):
779         '''stream out all metadata files in repository.'''
780         yield '0\n' # success
781         repo.ui.debug('%d files, %d bytes to transfer\n' %
782                       (len(entries), total_bytes))
783         yield '%d %d\n' % (len(entries), total_bytes)
784
785         sopener = repo.svfs
786         oldaudit = sopener.mustaudit
787         debugflag = repo.ui.debugflag
788         sopener.mustaudit = False
789
790         try:
791             for name, size in entries:
792                 if debugflag:
793                     repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
794                 # partially encode name over the wire for backwards compat
795                 yield '%s\0%d\n' % (store.encodedir(name), size)
796                 if size <= 65536:
797                     fp = sopener(name)
798                     try:
799                         data = fp.read(size)
800                     finally:
801                         fp.close()
802                     yield data
803                 else:
804                     for chunk in util.filechunkiter(sopener(name), limit=size):
805                         yield chunk
806         # replace with "finally:" when support for python 2.4 has been dropped
807         except Exception:
808             sopener.mustaudit = oldaudit
809             raise
810         sopener.mustaudit = oldaudit
811
812     return streamres(streamer(repo, entries, total_bytes))
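For reference, a consumer of the format described in the docstring could be shaped roughly like this (a hypothetical client-side sketch in Python 2; `fp` is the response stream already past the one-line status code, and error handling is omitted):

    def readstreamclone(fp):
        # header line: "<filecount> <totalbytes>\n"
        filecount, totalbytes = map(int, fp.readline().split(' ', 1))
        for _ in xrange(filecount):
            # per file: "<name>\0<size>\n", then exactly <size> raw bytes
            name, size = fp.readline()[:-1].split('\0', 1)
            yield name, fp.read(int(size))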
813
814 @wireprotocommand('unbundle', 'heads')
815 def unbundle(repo, proto, heads):
816     their_heads = decodelist(heads)
817
818     try:
819         proto.redirect()
820
821         exchange.check_heads(repo, their_heads, 'preparing changes')
822
823         # write bundle data to temporary file because it can be big
824         fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
825         fp = os.fdopen(fd, 'wb+')
826         r = 0
827         try:
828             proto.getfile(fp)
829             fp.seek(0)
830             gen = exchange.readbundle(repo.ui, fp, None)
831             r = exchange.unbundle(repo, gen, their_heads, 'serve',
832                                   proto._client())
833             if util.safehasattr(r, 'addpart'):
834                 # The return value looks streamable; we are in the bundle2
835                 # case and should return a stream.
836                 return streamres(r.getchunks())
837             return pushres(r)
838
839         finally:
840             fp.close()
841             os.unlink(tempname)
842     except error.BundleValueError, exc:
843         bundler = bundle2.bundle20(repo.ui)
844         errpart = bundler.newpart('b2x:error:unsupportedcontent')
845         if exc.parttype is not None:
846             errpart.addparam('parttype', exc.parttype)
847         if exc.params:
848             errpart.addparam('params', '\0'.join(exc.params))
849         return streamres(bundler.getchunks())
850     except util.Abort, inst:
851         # The old code we moved used sys.stderr directly.
852         # We did not change it, to minimise churn.
853         # This needs to be moved to something proper.
854         # Feel free to do it.
855         if getattr(inst, 'duringunbundle2', False):
856             bundler = bundle2.bundle20(repo.ui)
857             manargs = [('message', str(inst))]
858             advargs = []
859             if inst.hint is not None:
860                 advargs.append(('hint', inst.hint))
861             bundler.addpart(bundle2.bundlepart('b2x:error:abort',
862                                                manargs, advargs))
863             return streamres(bundler.getchunks())
864         else:
865             sys.stderr.write("abort: %s\n" % inst)
866             return pushres(0)
867     except error.PushRaced, exc:
868         if getattr(exc, 'duringunbundle2', False):
869             bundler = bundle2.bundle20(repo.ui)
870             bundler.newpart('b2x:error:pushraced', [('message', str(exc))])
871             return streamres(bundler.getchunks())
872         else:
873             return pusherr(str(exc))
@@ -1,807 +1,807 b''
1 This test is dedicated to testing the bundle2 container format
2
3 It tests multiple existing parts to exercise different features of the container. You
4 probably do not need to touch this test unless you change the binary encoding
5 of the bundle2 format itself.
6
7 Create an extension to test the bundle2 API
8
9   $ cat > bundle2.py << EOF
10   > """A small extension to test the bundle2 implementation
11   >
12   > The current bundle2 implementation is far too limited to be used in any core
13   > code. We still need to be able to test it while it grows up.
14   > """
15   >
16   > import sys, os
17   > from mercurial import cmdutil
18   > from mercurial import util
19   > from mercurial import bundle2
20   > from mercurial import scmutil
21   > from mercurial import discovery
22   > from mercurial import changegroup
23   > from mercurial import error
24   > from mercurial import obsolete
25   >
26   >
27   > try:
28   >     import msvcrt
29   >     msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
30   >     msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
31   >     msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
32   > except ImportError:
33   >     pass
34   >
35   > cmdtable = {}
36   > command = cmdutil.command(cmdtable)
37   >
38   > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
39   > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
40   > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
41   > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
42   >
43   > @bundle2.parthandler('test:song')
44   > def songhandler(op, part):
45   >     """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
46   >     op.ui.write('The choir starts singing:\n')
47   >     verses = 0
48   >     for line in part.read().split('\n'):
49   >         op.ui.write(' %s\n' % line)
50   >         verses += 1
51   >     op.records.add('song', {'verses': verses})
52   >
53   > @bundle2.parthandler('test:ping')
54   > def pinghandler(op, part):
55   >     op.ui.write('received ping request (id %i)\n' % part.id)
56   >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
57   >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
58   >         op.reply.newpart('test:pong', [('in-reply-to', str(part.id))],
59   >                          mandatory=False)
60   >
61   > @bundle2.parthandler('test:debugreply')
62   > def debugreply(op, part):
63   >     """print data about the capabilities of the bundle reply"""
64   >     if op.reply is None:
65   >         op.ui.write('debugreply: no reply\n')
66   >     else:
67   >         op.ui.write('debugreply: capabilities:\n')
68   >         for cap in sorted(op.reply.capabilities):
69   >             op.ui.write('debugreply: %r\n' % cap)
70   >             for val in op.reply.capabilities[cap]:
71   >                 op.ui.write('debugreply: %r\n' % val)
72   >
73   > @command('bundle2',
74   >          [('', 'param', [], 'stream level parameter'),
75   >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
76   >           ('', 'unknownparams', False, 'include unknown part parameters in the bundle'),
77   >           ('', 'parts', False, 'include some arbitrary parts in the bundle'),
78   >           ('', 'reply', False, 'produce a reply bundle'),
79   >           ('', 'pushrace', False, 'include a check:heads part with unknown nodes'),
80   >           ('', 'genraise', False, 'include a part that raises an exception during generation'),
81   >           ('r', 'rev', [], 'include those changesets in the bundle'),],
82   >          '[OUTPUTFILE]')
83   > def cmdbundle2(ui, repo, path=None, **opts):
84   >     """write a bundle2 container on standard output"""
85   >     bundler = bundle2.bundle20(ui)
86   >     for p in opts['param']:
87   >         p = p.split('=', 1)
88   >         try:
89   >             bundler.addparam(*p)
90   >         except ValueError, exc:
91   >             raise util.Abort('%s' % exc)
92   >
93   >     if opts['reply']:
94   >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
95   >         bundler.newpart('b2x:replycaps', data=capsstring)
96   >
97   >     if opts['pushrace']:
98   >         # also serves to test assignment of data outside of __init__
99   >         part = bundler.newpart('b2x:check:heads')
100   >         part.data = '01234567890123456789'
101   >
102   >     revs = opts['rev']
103   >     if 'rev' in opts:
104   >         revs = scmutil.revrange(repo, opts['rev'])
105   >     if revs:
106   >         # very crude version of a changegroup part creation
107   >         bundled = repo.revs('%ld::%ld', revs, revs)
108   >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
109   >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
110   >         outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
111   >         cg = changegroup.getlocalchangegroup(repo, 'test:bundle2', outgoing, None)
112   >         bundler.newpart('b2x:changegroup', data=cg.getchunks(),
113   >                         mandatory=False)
114   >
115   >     if opts['parts']:
116   >         bundler.newpart('test:empty', mandatory=False)
117   >         # add a second one to make sure we handle multiple parts
118   >         bundler.newpart('test:empty', mandatory=False)
119   >         bundler.newpart('test:song', data=ELEPHANTSSONG, mandatory=False)
120   >         bundler.newpart('test:debugreply', mandatory=False)
121   >         mathpart = bundler.newpart('test:math')
122   >         mathpart.addparam('pi', '3.14')
123   >         mathpart.addparam('e', '2.72')
124   >         mathpart.addparam('cooking', 'raw', mandatory=False)
125   >         mathpart.data = '42'
126   >         mathpart.mandatory = False
127   >         # advisory known part with an unknown mandatory param
128   >         bundler.newpart('test:song', [('randomparam','')], mandatory=False)
129   >     if opts['unknown']:
130   >         bundler.newpart('test:unknown', data='some random content')
131   >     if opts['unknownparams']:
132   >         bundler.newpart('test:song', [('randomparams', '')])
133   >     if opts['parts']:
134   >         bundler.newpart('test:ping', mandatory=False)
135   >     if opts['genraise']:
136   >         def genraise():
137   >             yield 'first line\n'
138   >             raise RuntimeError('Someone set up us the bomb!')
139   >         bundler.newpart('b2x:output', data=genraise(), mandatory=False)
140   >
141   >     if path is None:
142   >         file = sys.stdout
143   >     else:
144   >         file = open(path, 'wb')
145   >
146   >     try:
147   >         for chunk in bundler.getchunks():
148   >             file.write(chunk)
149   >     except RuntimeError, exc:
150   >         raise util.Abort(exc)
151   >
152   > @command('unbundle2', [], '')
153   > def cmdunbundle2(ui, repo, replypath=None):
154   >     """process a bundle2 stream from stdin on the current repo"""
155   >     try:
156   >         tr = None
157   >         lock = repo.lock()
158   >         tr = repo.transaction('processbundle')
159   >         try:
- 160   >             unbundler = bundle2.unbundle20(ui, sys.stdin)
+ 160   >             unbundler = bundle2.getunbundler(ui, sys.stdin)
161   >             op = bundle2.processbundle(repo, unbundler, lambda: tr)
162   >             tr.close()
163   >         except error.BundleValueError, exc:
164   >             raise util.Abort('missing support for %s' % exc)
165   >         except error.PushRaced, exc:
166   >             raise util.Abort('push race: %s' % exc)
167   >     finally:
168   >         if tr is not None:
169   >             tr.release()
170   >         lock.release()
171   >     remains = sys.stdin.read()
172   >     ui.write('%i unread bytes\n' % len(remains))
173   >     if op.records['song']:
174   >         totalverses = sum(r['verses'] for r in op.records['song'])
175   >         ui.write('%i total verses sung\n' % totalverses)
176   >     for rec in op.records['changegroup']:
177   >         ui.write('addchangegroup return: %i\n' % rec['return'])
178   >     if op.reply is not None and replypath is not None:
179   >         file = open(replypath, 'wb')
180   >         for chunk in op.reply.getchunks():
181   >             file.write(chunk)
182   >
183   > @command('statbundle2', [], '')
184   > def cmdstatbundle2(ui, repo):
185   >     """print statistics on the bundle2 container read from stdin"""
- 186   >     unbundler = bundle2.unbundle20(ui, sys.stdin)
+ 186   >     unbundler = bundle2.getunbundler(ui, sys.stdin)
187   >     try:
188   >         params = unbundler.params
189   >     except error.BundleValueError, exc:
190   >         raise util.Abort('unknown parameters: %s' % exc)
191   >     ui.write('options count: %i\n' % len(params))
192   >     for key in sorted(params):
193   >         ui.write('- %s\n' % key)
194   >         value = params[key]
195   >         if value is not None:
196   >             ui.write(' %s\n' % value)
197   >     count = 0
198   >     for p in unbundler.iterparts():
199   >         count += 1
200   >         ui.write(' :%s:\n' % p.type)
201   >         ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
202   >         ui.write(' advisory: %i\n' % len(p.advisoryparams))
203   >         ui.write(' payload: %i bytes\n' % len(p.read()))
204   >     ui.write('parts count: %i\n' % count)
205   > EOF
206   $ cat >> $HGRCPATH << EOF
207   > [extensions]
208   > bundle2=$TESTTMP/bundle2.py
209   > [experimental]
210   > bundle2-exp=True
211   > evolution=createmarkers
212   > [ui]
213   > ssh=python "$TESTDIR/dummyssh"
214   > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
215   > [web]
216   > push_ssl = false
217   > allow_push = *
218   > [phases]
219   > publish=False
220   > EOF
221
222 The extension requires a repo (currently unused)
223
224   $ hg init main
225   $ cd main
226   $ touch a
227   $ hg add a
228   $ hg commit -m 'a'
229
230
231 Empty bundle
232 =================
233
234 - no option
235 - no parts
236
237 Test bundling
238
239   $ hg bundle2
240   HG2Y\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
241
242 Test unbundling
243
244   $ hg bundle2 | hg statbundle2
245   options count: 0
246   parts count: 0
247
248 Test that old-style bundles are detected and refused
249
250   $ hg bundle --all ../bundle.hg
251   1 changesets found
252   $ hg statbundle2 < ../bundle.hg
253   abort: unknown bundle version 10
254   [255]
255
256 Test parameters
257 =================
258
259 - some options
260 - no parts
261
262 advisory parameters, no value
263 -------------------------------
264
265 Simplest possible form of parameters
266
267 Test generation of a simple option
268
269   $ hg bundle2 --param 'caution'
270   HG2Y\x00\x00\x00\x07caution\x00\x00\x00\x00 (no-eol) (esc)
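Reading the bytes back: `HG2Y` is the magic, `\x00\x00\x00\x07` is the big-endian 32-bit length of the parameter blob (7, i.e. len('caution')), `caution` is the blob itself, and the trailing `\x00\x00\x00\x00` is a zero part-header size, the end-of-stream marker that the debug runs below print as "part header size: 0". The multi-parameter examples that follow check out the same way: `\x00\x00\x00\x0c` is 12, the length of 'caution meal'.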
271
272 Test unbundling
273
274   $ hg bundle2 --param 'caution' | hg statbundle2
275   options count: 1
276   - caution
277   parts count: 0
278
279 Test generation of multiple options
280
281   $ hg bundle2 --param 'caution' --param 'meal'
282   HG2Y\x00\x00\x00\x0ccaution meal\x00\x00\x00\x00 (no-eol) (esc)
283
284 Test unbundling
285
286   $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
287   options count: 2
288   - caution
289   - meal
290   parts count: 0
291
292 advisory parameters, with value
293 -------------------------------
294
295 Test generation
296
297   $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
298   HG2Y\x00\x00\x00\x1ccaution meal=vegan elephants\x00\x00\x00\x00 (no-eol) (esc)
299
300 Test unbundling
301
302   $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
303   options count: 3
304   - caution
305   - elephants
306   - meal
307   vegan
308   parts count: 0
309
310 parameter with special char in value
311 ---------------------------------------------------
312
313 Test generation
314
315   $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
316   HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)
317
318 Test unbundling
319
320   $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
321   options count: 2
322   - e|! 7/
323   babar%#==tutu
324   - simple
325   parts count: 0
326
327 Test unknown mandatory option
328 ---------------------------------------------------
329
330   $ hg bundle2 --param 'Gravity' | hg statbundle2
331   abort: unknown parameters: Stream Parameter - Gravity
332   [255]
333
334 Test debug output
335 ---------------------------------------------------
336
337 bundling debug
338
339   $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
340   start emission of HG2Y stream
341   bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
342   start of parts
343   end of bundle
344
345 file content is ok
346
347   $ cat ../out.hg2
348   HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)
349
350 unbundling debug
351
352   $ hg statbundle2 --debug < ../out.hg2
353   start processing of HG2Y stream
354   reading bundle2 stream parameters
355   ignoring unknown parameter 'e|! 7/'
356   ignoring unknown parameter 'simple'
357   options count: 2
358   - e|! 7/
359   babar%#==tutu
360   - simple
361   start extraction of bundle2 parts
362   part header size: 0
363   end of bundle2 stream
364   parts count: 0
365
366
367 Test buggy input
368 ---------------------------------------------------
369
370 empty parameter name
371
372   $ hg bundle2 --param '' --quiet
373   abort: empty parameter name
374   [255]
375
376 bad parameter name
377
378   $ hg bundle2 --param 42babar
379   abort: non letter first character: '42babar'
380   [255]
381
382
383 Test part
384 =================
385
386   $ hg bundle2 --parts ../parts.hg2 --debug
387   start emission of HG2Y stream
388   bundle parameter:
389   start of parts
390   bundle part: "test:empty"
391   bundle part: "test:empty"
392   bundle part: "test:song"
393   bundle part: "test:debugreply"
394   bundle part: "test:math"
395   bundle part: "test:song"
396   bundle part: "test:ping"
397   end of bundle
398
399   $ cat ../parts.hg2
400   HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
401   test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
402   test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
403   Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
404   Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x00\x00\x1d test:song\x00\x00\x00\x05\x01\x00\x0b\x00randomparam\x00\x00\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
405
406
407   $ hg statbundle2 < ../parts.hg2
408   options count: 0
409   :test:empty:
410   mandatory: 0
411   advisory: 0
412   payload: 0 bytes
413   :test:empty:
414   mandatory: 0
415   advisory: 0
416   payload: 0 bytes
417   :test:song:
418   mandatory: 0
419   advisory: 0
420   payload: 178 bytes
421   :test:debugreply:
422   mandatory: 0
423   advisory: 0
424   payload: 0 bytes
425   :test:math:
426   mandatory: 2
427   advisory: 1
428   payload: 2 bytes
429   :test:song:
430   mandatory: 1
431   advisory: 0
432   payload: 0 bytes
433   :test:ping:
434   mandatory: 0
435   advisory: 0
436   payload: 0 bytes
437   parts count: 7
438
439   $ hg statbundle2 --debug < ../parts.hg2
440   start processing of HG2Y stream
441   reading bundle2 stream parameters
442   options count: 0
443   start extraction of bundle2 parts
444   part header size: 17
445   part type: "test:empty"
446   part id: "0"
447   part parameters: 0
448   :test:empty:
449   mandatory: 0
450   advisory: 0
451   payload chunk size: 0
452   payload: 0 bytes
453   part header size: 17
454   part type: "test:empty"
455   part id: "1"
456   part parameters: 0
457   :test:empty:
458   mandatory: 0
459   advisory: 0
460   payload chunk size: 0
461   payload: 0 bytes
462   part header size: 16
463   part type: "test:song"
464   part id: "2"
465   part parameters: 0
466   :test:song:
467   mandatory: 0
468   advisory: 0
469   payload chunk size: 178
470   payload chunk size: 0
471   payload: 178 bytes
472   part header size: 22
473   part type: "test:debugreply"
474   part id: "3"
475   part parameters: 0
476   :test:debugreply:
477   mandatory: 0
478   advisory: 0
479   payload chunk size: 0
480   payload: 0 bytes
481   part header size: 43
482   part type: "test:math"
483   part id: "4"
484   part parameters: 3
485   :test:math:
486   mandatory: 2
487   advisory: 1
488   payload chunk size: 2
489   payload chunk size: 0
490   payload: 2 bytes
491   part header size: 29
492   part type: "test:song"
493   part id: "5"
494   part parameters: 1
495   :test:song:
496   mandatory: 1
497   advisory: 0
498   payload chunk size: 0
499   payload: 0 bytes
500   part header size: 16
501   part type: "test:ping"
502   part id: "6"
503   part parameters: 0
504   :test:ping:
505   mandatory: 0
506   advisory: 0
507   payload chunk size: 0
508   payload: 0 bytes
509   part header size: 0
510   end of bundle2 stream
511   parts count: 7
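The header sizes in this run can be verified by hand: in this revision of the format a part header is one byte of type length, the type string, a four-byte part id and two bytes of parameter counts, followed by the parameter size and value tables. So 'test:empty' costs 1 + 10 + 4 + 2 = 17 bytes and 'test:song' 1 + 9 + 4 + 2 = 16, matching the sizes above; 'test:math' adds six parameter-size bytes and 21 bytes of keys and values ('pi', '3.14', 'e', '2.72', 'cooking', 'raw') on top of its 16-byte base, giving the 43 printed.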
512
513 Test actual unbundling of test parts
514 =======================================
515
516 Process the bundle
517
518   $ hg unbundle2 --debug < ../parts.hg2
519   start processing of HG2Y stream
520   reading bundle2 stream parameters
521   start extraction of bundle2 parts
522   part header size: 17
523   part type: "test:empty"
524   part id: "0"
525   part parameters: 0
526   ignoring unsupported advisory part test:empty
527   payload chunk size: 0
528   part header size: 17
529   part type: "test:empty"
530   part id: "1"
531   part parameters: 0
532   ignoring unsupported advisory part test:empty
533   payload chunk size: 0
534   part header size: 16
535   part type: "test:song"
536   part id: "2"
537   part parameters: 0
538   found a handler for part 'test:song'
539   The choir starts singing:
540   payload chunk size: 178
541   payload chunk size: 0
542   Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
543   Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
544   Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
545   part header size: 22
546   part type: "test:debugreply"
547   part id: "3"
548   part parameters: 0
549   found a handler for part 'test:debugreply'
550   debugreply: no reply
551   payload chunk size: 0
552   part header size: 43
553   part type: "test:math"
554   part id: "4"
555   part parameters: 3
556   ignoring unsupported advisory part test:math
557   payload chunk size: 2
558   payload chunk size: 0
559   part header size: 29
560   part type: "test:song"
561   part id: "5"
562   part parameters: 1
563   found a handler for part 'test:song'
564   ignoring unsupported advisory part test:song - randomparam
565   payload chunk size: 0
566   part header size: 16
567   part type: "test:ping"
568   part id: "6"
569   part parameters: 0
570   found a handler for part 'test:ping'
571   received ping request (id 6)
572   payload chunk size: 0
573   part header size: 0
574   end of bundle2 stream
575   0 unread bytes
576   3 total verses sung
577
578 Unbundle with an unknown mandatory part
579 (should abort)
580
581   $ hg bundle2 --parts --unknown ../unknown.hg2
582
583   $ hg unbundle2 < ../unknown.hg2
584   The choir starts singing:
585   Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
586   Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
587   Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
588   debugreply: no reply
589   0 unread bytes
590   abort: missing support for test:unknown
591   [255]
592
593 Unbundle with unknown mandatory part parameters
594 (should abort)
595
596   $ hg bundle2 --unknownparams ../unknown.hg2
597
598   $ hg unbundle2 < ../unknown.hg2
599   0 unread bytes
600   abort: missing support for test:song - randomparams
601   [255]
602
603 unbundle with a reply
604
605   $ hg bundle2 --parts --reply ../parts-reply.hg2
606   $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
607   0 unread bytes
608   3 total verses sung
609
610 The reply is a bundle
611
612   $ cat ../reply.hg2
613   HG2Y\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
614   b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
615   Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
616   Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
617   Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
618   \x00\x00\x00\x00\x00\x00\x00\x1f (esc)
619   b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
620   debugreply: 'city=!'
621   debugreply: 'celeste,ville'
622   debugreply: 'elephants'
623   debugreply: 'babar'
624   debugreply: 'celeste'
625   debugreply: 'ping-pong'
626   \x00\x00\x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to7\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
627   b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to7\x00\x00\x00=received ping request (id 7) (esc)
628   replying to ping request (id 7)
629   \x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
630
631 The reply is valid
632
633   $ hg statbundle2 < ../reply.hg2
634   options count: 0
635   :b2x:output:
636   mandatory: 0
637   advisory: 1
638   payload: 217 bytes
639   :b2x:output:
640   mandatory: 0
641   advisory: 1
642   payload: 201 bytes
643   :test:pong:
644   mandatory: 1
645   advisory: 0
646   payload: 0 bytes
647   :b2x:output:
648   mandatory: 0
649   advisory: 1
650   payload: 61 bytes
651   parts count: 4
652
653 Unbundle the reply to get the output:
654
655   $ hg unbundle2 < ../reply.hg2
656   remote: The choir starts singing:
657   remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
658   remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
659   remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
660   remote: debugreply: capabilities:
661   remote: debugreply: 'city=!'
662   remote: debugreply: 'celeste,ville'
663   remote: debugreply: 'elephants'
664   remote: debugreply: 'babar'
665   remote: debugreply: 'celeste'
666   remote: debugreply: 'ping-pong'
667   remote: received ping request (id 7)
668   remote: replying to ping request (id 7)
669   0 unread bytes
670
671 Test push race detection
672
673   $ hg bundle2 --pushrace ../part-race.hg2
674
675   $ hg unbundle2 < ../part-race.hg2
676   0 unread bytes
677   abort: push race: repository changed while pushing - please try again
678   [255]
679
679
680 Support for changegroup
681 ===================================
682
683 $ hg unbundle $TESTDIR/bundles/rebase.hg
684 adding changesets
685 adding manifests
686 adding file changes
687 added 8 changesets with 7 changes to 7 files (+3 heads)
688 (run 'hg heads' to see heads, 'hg merge' to merge)
689
690 $ hg log -G
691 o 8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
692 |
693 | o 7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
694 |/|
695 o | 6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
696 | |
697 | o 5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
698 |/
699 | o 4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
700 | |
701 | o 3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
702 | |
703 | o 2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
704 |/
705 o 1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A
706
707 @ 0:3903775176ed draft test a
708
709
710 $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
711 4 changesets found
712 list of changesets:
713 32af7686d403cf45b5d95f2d70cebea587ac806a
714 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
715 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
716 02de42196ebee42ef284b6780a87cdc96e8eaab6
717 start emission of HG2Y stream
718 bundle parameter:
719 start of parts
720 bundle part: "b2x:changegroup"
721 bundling: 1/4 changesets (25.00%)
722 bundling: 2/4 changesets (50.00%)
723 bundling: 3/4 changesets (75.00%)
724 bundling: 4/4 changesets (100.00%)
725 bundling: 1/4 manifests (25.00%)
726 bundling: 2/4 manifests (50.00%)
727 bundling: 3/4 manifests (75.00%)
728 bundling: 4/4 manifests (100.00%)
729 bundling: D 1/3 files (33.33%)
730 bundling: E 2/3 files (66.67%)
731 bundling: H 3/3 files (100.00%)
732 end of bundle
733
734 $ cat ../rev.hg2
735 HG2Y\x00\x00\x00\x00\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
736 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
737 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
738 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
739 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
740 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
741 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
742 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
743 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
744 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
745 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
746 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
747 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
748 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
749 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
750 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
751 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
752 l\r (no-eol) (esc)
753 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
754 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
755 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
756 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
757
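The raw dump can be taken apart by hand: after the HG2Y magic comes an int32
giving the size of the stream parameters (zero here), then each part as an
int32-prefixed header blob followed by int32-size-prefixed payload chunks; an
empty chunk closes a payload and an empty header closes the stream. A walker
sketch under those assumptions (not Mercurial's unbundler, and the negative
"interrupt" chunk sizes shown at the end of this file are not handled):

  import struct

  def walk_bundle2(fp):
      """List (part type, payload size) pairs of an HG2Y stream."""
      assert fp.read(4) == b'HG2Y', 'not an HG2Y stream'
      (paramssize,) = struct.unpack('>i', fp.read(4))
      streamparams = fp.read(paramssize)       # space-separated, urlquoted
      parts = []
      while True:
          (headersize,) = struct.unpack('>i', fp.read(4))
          if headersize == 0:                  # end-of-stream marker
              break
          header = fp.read(headersize)
          parttype = header[1:1 + header[0]]   # first byte: type length
          payload = 0
          while True:
              (chunksize,) = struct.unpack('>i', fp.read(4))
              if chunksize == 0:               # empty chunk: end of payload
                  break
              payload += len(fp.read(chunksize))
          parts.append((parttype, payload))
      return streamparams, parts

For ../rev.hg2 this should report empty stream parameters and a single
b2x:changegroup part, matching the debugbundle output below.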
758 $ hg debugbundle ../rev.hg2
759 Stream params: {}
760 b2x:changegroup -- '{}'
761 32af7686d403cf45b5d95f2d70cebea587ac806a
762 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
763 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
764 02de42196ebee42ef284b6780a87cdc96e8eaab6
765 $ hg unbundle ../rev.hg2
766 adding changesets
767 adding manifests
768 adding file changes
769 added 0 changesets with 0 changes to 3 files
770
771 with reply
772
773 $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
774 $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
775 0 unread bytes
776 addchangegroup return: 1
777
778 $ cat ../rev-reply.hg2
779 HG2Y\x00\x00\x00\x00\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
780 b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
781 adding manifests
782 adding file changes
783 added 0 changesets with 0 changes to 3 files
784 \x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
785
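Fed to the decode_part_header() sketch from earlier, the reply header above
comes apart as follows (the byte string is copied out of the dump; \x15 is
the 21-byte type length):

  header = (b'\x15b2x:reply:changegroup\x00\x00\x00\x00'
            b'\x00\x02\x0b\x01\x06\x01in-reply-to1return1')
  parttype, partid, mandatory, advisory = decode_part_header(header)
  # advisory == [(b'in-reply-to', b'1'), (b'return', b'1')]: the part
  # answers the changegroup part (id 1) and relays the addchangegroup
  # return code printed above.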
786 Check handling of exception during generation.
787 ----------------------------------------------
788
789 $ hg bundle2 --genraise > ../genfailed.hg2
790 abort: Someone set up us the bomb!
791 [255]
792
793 Should still be a valid bundle
794
795 $ cat ../genfailed.hg2
796 HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
797 b2x:output\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00L\x0fb2x:error:abort\x00\x00\x00\x00\x01\x00\x07-messageunexpected error: Someone set up us the bomb!\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
798
799 And its handling on the other side raises a clean exception
800
801 $ cat ../genfailed.hg2 | hg unbundle2
802 0 unread bytes
803 abort: unexpected error: Someone set up us the bomb!
804 [255]
805
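The \xff\xff\xff\xff in the dump above is a payload chunk size of -1, the
interruption marker: a reader hitting a negative size processes an embedded
out-of-band part (here b2x:error:abort) before resuming the enclosing
payload. Extending the earlier walker sketch, and assuming on_interrupt
knows how to consume one embedded part:

  import struct

  def read_payload(fp, on_interrupt):
      """Consume one part payload, honouring the interrupt marker."""
      while True:
          (chunksize,) = struct.unpack('>i', fp.read(4))
          if chunksize == 0:            # empty chunk: normal end of payload
              return
          if chunksize < 0:             # interruption: an embedded part
              on_interrupt(fp)          # (b2x:error:abort above) follows
              continue
          fp.read(chunksize)            # ordinary payload data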
806
807 $ cd ..