bundle2: gracefully handle PushRaced error during unbundle...
Pierre-Yves David
r21186:9f3652e8 stable
--- a/mercurial/bundle2.py
+++ b/mercurial/bundle2.py
@@ -1,762 +1,768 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: (16 bits integer)

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is unable to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    troubles.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: (16 bits integer)

  The total number of Bytes used by the part headers. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route an application level handler, that can
    interpret payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name

    :partid: A 32bits integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

import util
import struct
import urllib
import string

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_magicstring = 'HG2X'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

class UnknownPartError(KeyError):
    """error raised when no handler is found for a Mandatory part"""
    pass

parthandlermapping = {}

def parthandler(parttype):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype')
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

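# Editor's illustration (not part of the original file): typical use of the
# class above.
#
#     records = unbundlerecords()
#     records.add('changegroup', {'return': 1})
#     records['changegroup']   # -> ({'return': 1},)
#     list(records)            # -> [('changegroup', {'return': 1})]
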
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=_notransaction):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            parttype = part.type
            # part keys are matched lower case
            key = parttype.lower()
            try:
                handler = parthandlermapping[key]
                op.ui.debug('found a handler for part %r\n' % parttype)
            except KeyError:
                if key != parttype: # mandatory parts
                    # todo:
                    # - use a more precise exception
                    raise UnknownPartError(key)
                op.ui.debug('ignoring unknown advisory part %r\n' % key)
                # consuming the part
                part.read()
                continue

            # handler is called outside the above try block so that we don't
            # risk catching KeyErrors from anything other than the
            # parthandlermapping lookup (any KeyError raised by handler()
            # itself represents a defect of a different variety).
            output = None
            if op.reply is not None:
                op.ui.pushbuffer(error=True)
                output = ''
            try:
                handler(op, part)
            finally:
                if output is not None:
                    output = op.ui.popbuffer()
            if output:
                outpart = bundlepart('b2x:output',
                                     advisoryparams=[('in-reply-to',
                                                      str(part.id))],
                                     data=output)
                op.reply.addpart(outpart)
            part.read()
    except Exception, exc:
        if part is not None:
            # consume the bundle content
            part.read()
        for part in iterparts:
            # consume the bundle content
            part.read()
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from the ones from bundle1 processing. This is mostly
        # needed to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        raise
    return op

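# Editor's illustration (not part of the original file): a minimal round trip
# through the API above. `ui` and `repo` are assumed to be standard Mercurial
# objects; 'b2x:output' is used because its handler needs no transaction.
def _exampleprocessbundle(ui, repo):
    bundler = bundle20(ui)
    bundler.addpart(bundlepart('b2x:output', data='hello'))
    # serialize, then parse it back and process it (prints 'remote: hello')
    stream = util.chunkbuffer(bundler.getchunks())
    return processbundle(repo, unbundle20(ui, stream))
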
def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

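# Editor's illustration (not part of the original file): how a caps blob
# round-trips; 'b2x:listkeys' is a hypothetical capability name.
#
#     encodecaps({'HG2X': [], 'b2x:listkeys': ['namespace']})
#     # -> 'HG2X\nb2x%3Alistkeys=namespace'
#     decodecaps('HG2X\nb2x%3Alistkeys=namespace')
#     # -> {'HG2X': [], 'b2x:listkeys': ['namespace']}
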
class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `addpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
        magic, version = header[0:2], header[2:4]
        if magic != 'HG':
            raise util.Abort(_('not a Mercurial bundle'))
        if version != '2X':
            raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise KeyError(name)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self.data = data
        self.mandatoryparams = mandatoryparams
        self.advisoryparams = advisoryparams

    def getchunks(self):
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self.mandatoryparams = manparams
        self.advisoryparams = advparams
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data


@parthandler('b2x:changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = bundlepart('b2x:reply:changegroup', (),
                          [('in-reply-to', str(inpart.id)),
                           ('return', '%i' % ret)])
        op.reply.addpart(part)
    assert not inpart.read()

@parthandler('b2x:reply:changegroup')
def handlechangegroup(op, inpart):
    p = dict(inpart.advisoryparams)
    ret = int(p['return'])
    op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))

@parthandler('b2x:check:heads')
def handlechangegroup(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

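# Editor's illustration (not part of the original file): a sketch of the part
# a pushing client could emit for the handler above, built from the heads it
# based its push on. The real emission code lives elsewhere and may differ.
def _examplecheckheadspart(repo):
    # each head is a 20-byte binary node; the handler reads them back in
    # 20-byte slices
    return bundlepart('b2x:check:heads', data=''.join(repo.heads()))
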
@parthandler('b2x:output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.write(('remote: %s\n' % line))

@parthandler('b2x:replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('b2x:error:abort')
def handlereplycaps(op, inpart):
    """Used to transmit abort error over the wire"""
    manargs = dict(inpart.mandatoryparams)
    advargs = dict(inpart.advisoryparams)
    raise util.Abort(manargs['message'], hint=advargs.get('hint'))
+
+@parthandler('b2x:error:pushraced')
+def handlereplycaps(op, inpart):
+    """Used to transmit push race error over the wire"""
+    manargs = dict(inpart.mandatoryparams)
+    raise error.ResponseError(_('push failed:'), manargs['message'])
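
The new part above is the transport half of the change: a push race detected
on the server can now be shipped back to the client as a `b2x:error:pushraced`
part instead of breaking the channel. A minimal sketch of the matching
server-side conversion follows; this is an editor's illustration under the
assumption that the server wraps its bundle2 processing, not code from this
changeset (the emission side lands separately):

    def serverunbundle(repo, unbundler):
        # illustrative only: trap the race and answer with an error part
        try:
            return bundle2.processbundle(repo, unbundler)
        except error.PushRaced, exc:
            bundler = bundle2.bundle20(repo.ui)
            bundler.addpart(bundle2.bundlepart('b2x:error:pushraced',
                                               [('message', str(exc))]))
            return bundler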

--- a/mercurial/localrepo.py
+++ b/mercurial/localrepo.py
@@ -1,1910 +1,1910 @@
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from node import hex, nullid, short
7 from node import hex, nullid, short
8 from i18n import _
8 from i18n import _
9 import urllib
9 import urllib
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
12 import lock as lockmod
12 import lock as lockmod
13 import transaction, store, encoding, exchange, bundle2
13 import transaction, store, encoding, exchange, bundle2
14 import scmutil, util, extensions, hook, error, revset
14 import scmutil, util, extensions, hook, error, revset
15 import match as matchmod
15 import match as matchmod
16 import merge as mergemod
16 import merge as mergemod
17 import tags as tagsmod
17 import tags as tagsmod
18 from lock import release
18 from lock import release
19 import weakref, errno, os, time, inspect
19 import weakref, errno, os, time, inspect
20 import branchmap, pathutil
20 import branchmap, pathutil
21 propertycache = util.propertycache
21 propertycache = util.propertycache
22 filecache = scmutil.filecache
22 filecache = scmutil.filecache
23
23
24 class repofilecache(filecache):
24 class repofilecache(filecache):
25 """All filecache usage on repo are done for logic that should be unfiltered
25 """All filecache usage on repo are done for logic that should be unfiltered
26 """
26 """
27
27
28 def __get__(self, repo, type=None):
28 def __get__(self, repo, type=None):
29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
30 def __set__(self, repo, value):
30 def __set__(self, repo, value):
31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
32 def __delete__(self, repo):
32 def __delete__(self, repo):
33 return super(repofilecache, self).__delete__(repo.unfiltered())
33 return super(repofilecache, self).__delete__(repo.unfiltered())
34
34
35 class storecache(repofilecache):
35 class storecache(repofilecache):
36 """filecache for files in the store"""
36 """filecache for files in the store"""
37 def join(self, obj, fname):
37 def join(self, obj, fname):
38 return obj.sjoin(fname)
38 return obj.sjoin(fname)
39
39
40 class unfilteredpropertycache(propertycache):
40 class unfilteredpropertycache(propertycache):
41 """propertycache that apply to unfiltered repo only"""
41 """propertycache that apply to unfiltered repo only"""
42
42
43 def __get__(self, repo, type=None):
43 def __get__(self, repo, type=None):
44 unfi = repo.unfiltered()
44 unfi = repo.unfiltered()
45 if unfi is repo:
45 if unfi is repo:
46 return super(unfilteredpropertycache, self).__get__(unfi)
46 return super(unfilteredpropertycache, self).__get__(unfi)
47 return getattr(unfi, self.name)
47 return getattr(unfi, self.name)
48
48
49 class filteredpropertycache(propertycache):
49 class filteredpropertycache(propertycache):
50 """propertycache that must take filtering in account"""
50 """propertycache that must take filtering in account"""
51
51
52 def cachevalue(self, obj, value):
52 def cachevalue(self, obj, value):
53 object.__setattr__(obj, self.name, value)
53 object.__setattr__(obj, self.name, value)
54
54
55
55
56 def hasunfilteredcache(repo, name):
56 def hasunfilteredcache(repo, name):
57 """check if a repo has an unfilteredpropertycache value for <name>"""
57 """check if a repo has an unfilteredpropertycache value for <name>"""
58 return name in vars(repo.unfiltered())
58 return name in vars(repo.unfiltered())
59
59
60 def unfilteredmethod(orig):
60 def unfilteredmethod(orig):
61 """decorate method that always need to be run on unfiltered version"""
61 """decorate method that always need to be run on unfiltered version"""
62 def wrapper(repo, *args, **kwargs):
62 def wrapper(repo, *args, **kwargs):
63 return orig(repo.unfiltered(), *args, **kwargs)
63 return orig(repo.unfiltered(), *args, **kwargs)
64 return wrapper
64 return wrapper
65
65
66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
67 'unbundle'))
67 'unbundle'))
68 legacycaps = moderncaps.union(set(['changegroupsubset']))
68 legacycaps = moderncaps.union(set(['changegroupsubset']))
69
69
class localpeer(peer.peerrepository):
    '''peer for a local repo; reflects only the most recent API'''

    def __init__(self, repo, caps=moderncaps):
        peer.peerrepository.__init__(self)
        self._repo = repo.filtered('served')
        self.ui = repo.ui
        self._caps = repo._restrictcapabilities(caps)
        self.requirements = repo.requirements
        self.supportedformats = repo.supportedformats

    def close(self):
        self._repo.close()

    def _capabilities(self):
        return self._caps

    def local(self):
        return self._repo

    def canpush(self):
        return True

    def url(self):
        return self._repo.url()

    def lookup(self, key):
        return self._repo.lookup(key)

    def branchmap(self):
        return self._repo.branchmap()

    def heads(self):
        return self._repo.heads()

    def known(self, nodes):
        return self._repo.known(nodes)

    def getbundle(self, source, heads=None, common=None, bundlecaps=None,
                  format='HG10', **kwargs):
        cg = exchange.getbundle(self._repo, source, heads=heads,
                                common=common, bundlecaps=bundlecaps, **kwargs)
        if bundlecaps is not None and 'HG2X' in bundlecaps:
            # When requesting a bundle2, getbundle returns a stream to make
            # the wire-level function happier. We need to build a proper
            # object from it in the local peer.
            cg = bundle2.unbundle20(self.ui, cg)
        return cg

    # TODO: we might want to move the next two calls into legacypeer and add
    # unbundle instead.

    def unbundle(self, cg, heads, url):
        """apply a bundle on a repo

        This function handles the repo locking itself."""
        try:
            cg = exchange.readbundle(self.ui, cg, None)
            ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
            if util.safehasattr(ret, 'getchunks'):
                # This is a bundle20 object, turn it into an unbundler.
                # This little dance should be dropped eventually when the
                # API is finally improved.
                stream = util.chunkbuffer(ret.getchunks())
                ret = bundle2.unbundle20(self.ui, stream)
            return ret
        except error.PushRaced, exc:
            raise error.ResponseError(_('push failed:'), str(exc))

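    # Illustrative sketch of what a caller of the method above observes when
    # a push races (session and message text are illustrative, not verbatim):
    #
    #   >>> peer.unbundle(cg, ['force'], peer.url())
    #   ResponseError: push failed: repository changed while pushing -
    #   please try again
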
    def lock(self):
        return self._repo.lock()

    def addchangegroup(self, cg, source, url):
        return changegroup.addchangegroup(self._repo, cg, source, url)

    def pushkey(self, namespace, key, old, new):
        return self._repo.pushkey(namespace, key, old, new)

    def listkeys(self, namespace):
        return self._repo.listkeys(namespace)

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, three, four, five)

class locallegacypeer(localpeer):
    '''peer extension which implements legacy methods too; used for tests with
    restricted capabilities'''

    def __init__(self, repo):
        localpeer.__init__(self, repo, caps=legacycaps)

    def branches(self, nodes):
        return self._repo.branches(nodes)

    def between(self, pairs):
        return self._repo.between(pairs)

    def changegroup(self, basenodes, source):
        return changegroup.changegroup(self._repo, basenodes, source)

    def changegroupsubset(self, bases, heads, source):
        return changegroup.changegroupsubset(self._repo, bases, heads, source)

class localrepository(object):

    supportedformats = set(('revlogv1', 'generaldelta'))
    _basesupported = supportedformats | set(('store', 'fncache', 'shared',
                                             'dotencode'))
    openerreqs = set(('revlogv1', 'generaldelta'))
    requirements = ['revlogv1']
    filtername = None

    bundle2caps = {'HG2X': ()}

    # a set of (ui, featureset) functions.
    # only functions defined in modules of enabled extensions are invoked
    featuresetupfuncs = set()

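    # Illustrative sketch of how an extension might register one of these
    # functions; 'myfeature' is a made-up requirement name:
    #
    #   def featuresetup(ui, supported):
    #       supported.add('myfeature')
    #   localrepo.localrepository.featuresetupfuncs.add(featuresetup)
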
    def _baserequirements(self, create):
        return self.requirements[:]

    def __init__(self, baseui, path=None, create=False):
        self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
        self.wopener = self.wvfs
        self.root = self.wvfs.base
        self.path = self.wvfs.join(".hg")
        self.origroot = path
        self.auditor = pathutil.pathauditor(self.root, self._checknested)
        self.vfs = scmutil.vfs(self.path)
        self.opener = self.vfs
        self.baseui = baseui
        self.ui = baseui.copy()
        self.ui.copy = baseui.copy # prevent copying repo configuration
        # A list of callbacks to shape the phases if no phase data is found.
        # Callbacks are in the form: func(repo, roots) --> processed roots.
        # This list is to be filled by extensions during repo setup.
        self._phasedefaults = []
        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
            extensions.loadall(self.ui)
        except IOError:
            pass

        if self.featuresetupfuncs:
            self.supported = set(self._basesupported) # use private copy
            extmods = set(m.__name__ for n, m
                          in extensions.extensions(self.ui))
            for setupfunc in self.featuresetupfuncs:
                if setupfunc.__module__ in extmods:
                    setupfunc(self.ui, self.supported)
        else:
            self.supported = self._basesupported

        if not self.vfs.isdir():
            if create:
                if not self.wvfs.exists():
                    self.wvfs.makedirs()
                self.vfs.makedir(notindexed=True)
                requirements = self._baserequirements(create)
                if self.ui.configbool('format', 'usestore', True):
                    self.vfs.mkdir("store")
                    requirements.append("store")
                    if self.ui.configbool('format', 'usefncache', True):
                        requirements.append("fncache")
                        if self.ui.configbool('format', 'dotencode', True):
                            requirements.append('dotencode')
                    # create an invalid changelog
                    self.vfs.append(
                        "00changelog.i",
                        '\0\0\0\2' # represents revlogv2
                        ' dummy changelog to prevent using the old repo layout'
                    )
                if self.ui.configbool('format', 'generaldelta', False):
                    requirements.append("generaldelta")
                requirements = set(requirements)
            else:
                raise error.RepoError(_("repository %s not found") % path)
        elif create:
            raise error.RepoError(_("repository %s already exists") % path)
        else:
            try:
                requirements = scmutil.readrequires(self.vfs, self.supported)
            except IOError, inst:
                if inst.errno != errno.ENOENT:
                    raise
                requirements = set()

        self.sharedpath = self.path
        try:
            vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
                              realpath=True)
            s = vfs.base
            if not vfs.exists():
                raise error.RepoError(
                    _('.hg/sharedpath points to nonexistent directory %s') % s)
            self.sharedpath = s
        except IOError, inst:
            if inst.errno != errno.ENOENT:
                raise

        self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
        self.spath = self.store.path
        self.svfs = self.store.vfs
        self.sopener = self.svfs
        self.sjoin = self.store.join
        self.vfs.createmode = self.store.createmode
        self._applyrequirements(requirements)
        if create:
            self._writerequirements()

        self._branchcaches = {}
        self.filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

        # A cache for various files under .hg/ that tracks file changes
        # (used by the filecache decorator).
        #
        # Maps a property name to its util.filecacheentry
        self._filecache = {}

        # Holds sets of revisions to be filtered. Should be cleared when
        # something might have changed the filter value:
        # - new changesets,
        # - phase change,
        # - new obsolescence marker,
        # - working directory parent change,
        # - bookmark changes
        self.filteredrevcache = {}

    def close(self):
        pass

    def _restrictcapabilities(self, caps):
        # bundle2 is not ready for prime time, drop it unless explicitly
        # required by the tests (or some brave tester)
        if self.ui.configbool('experimental', 'bundle2-exp', False):
            caps = set(caps)
            capsblob = bundle2.encodecaps(self.bundle2caps)
            caps.add('bundle2-exp=' + urllib.quote(capsblob))
        return caps

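    # For example, with the class-level bundle2caps above, enabling
    # experimental.bundle2-exp should result in a capability roughly like
    # 'bundle2-exp=HG2X' (the exact blob is whatever bundle2.encodecaps
    # produces, urlquoted).
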
    def _applyrequirements(self, requirements):
        self.requirements = requirements
        self.sopener.options = dict((r, 1) for r in requirements
                                    if r in self.openerreqs)
        chunkcachesize = self.ui.configint('format', 'chunkcachesize')
        if chunkcachesize is not None:
            self.sopener.options['chunkcachesize'] = chunkcachesize

    def _writerequirements(self):
        reqfile = self.opener("requires", "w")
        for r in sorted(self.requirements):
            reqfile.write("%s\n" % r)
        reqfile.close()

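    # A freshly created repository with the default configuration would thus
    # typically end up with a .hg/requires file along these lines (sorted,
    # one entry per line):
    #
    #   dotencode
    #   fncache
    #   revlogv1
    #   store
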
    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return the unfiltered version of the repository

        Intended to be overwritten by the filtered repo."""
        return self

    def filtered(self, name):
        """Return a filtered version of a repository"""
        # build a new class with the mixin and the current class
        # (possibly a subclass of the repo)
        class proxycls(repoview.repoview, self.unfiltered().__class__):
            pass
        return proxycls(self, name)

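    # Illustrative usage: repo.filtered('served') is what localpeer hands to
    # clients above, and repo.filtered('visible') hides hidden changesets;
    # the proxy class delegates everything else to the unfiltered repo.
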
    @repofilecache('bookmarks')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @repofilecache('bookmarks.current')
    def _bookmarkcurrent(self):
        return bookmarks.readcurrent(self)

    def bookmarkheads(self, bookmark):
        name = bookmark.split('@', 1)[0]
        heads = []
        for mark, n in self._bookmarks.iteritems():
            if mark.split('@', 1)[0] == name:
                heads.append(n)
        return heads

    @storecache('phaseroots')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        store = obsolete.obsstore(self.sopener)
        if store and not obsolete._enabled:
            # message is rare enough to not be translated
            msg = 'obsolete feature not enabled but %i markers found!\n'
            self.ui.warn(msg % len(list(store)))
        return store

    @storecache('00changelog.i')
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        return c

    @storecache('00manifest.i')
    def manifest(self):
        return manifest.manifest(self.sopener)

    @repofilecache('dirstate')
    def dirstate(self):
        warned = [0]
        def validate(node):
            try:
                self.changelog.rev(node)
                return node
            except error.LookupError:
                if not warned[0]:
                    warned[0] = True
                    self.ui.warn(_("warning: ignoring unknown"
                                   " working parent %s!\n") % short(node))
                return nullid

        return dirstate.dirstate(self.opener, self.ui, self.root, validate)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Return a list of revisions matching the given revset'''
        expr = revset.formatspec(expr, *args)
        m = revset.match(None, expr)
        return m(self, revset.spanset(self))

    def set(self, expr, *args):
        '''
        Yield a context for each matching revision, after doing arg
        replacement via revset.formatspec
        '''
        for r in self.revs(expr, *args):
            yield self[r]

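    # Illustrative sketch of the formatspec-based calling convention
    # ('default' is just an example branch name):
    #
    #   revs = repo.revs('branch(%s) and head()', 'default')
    #   for ctx in repo.set('limit(%ld, 10)', revs):
    #       ...
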
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        return hook.hook(self.ui, self, name, throw, **args)

    @unfilteredmethod
    def _tag(self, names, node, message, local, user, date, extra={}):
        if isinstance(names, str):
            names = (names,)

        branches = self.branchmap()
        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)
            if name in branches:
                self.ui.warn(_("warning: tag %s conflicts with existing"
                               " branch name\n") % name)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if (self._tagscache.tagtypes and
                    name in self._tagscache.tagtypes):
                    old = self.tags().get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError, e:
            if e.errno != errno.ENOENT:
                raise
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        fp.close()

        self.invalidatecaches()

        if '.hgtags' not in self.dirstate:
            self[None].add(['.hgtags'])

        m = matchmod.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

    def tag(self, names, node, message, local, user, date):
        '''tag a revision with one or more symbolic names.

        names is a list of strings or, when adding a single tag, names may be
        a string.

        if local is True, the tags are stored in a per-repository file.
        otherwise, they are stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tags in a non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        if not local:
            for x in self.status()[:5]:
                if '.hgtags' in x:
                    raise util.Abort(_('working copy of .hgtags is changed '
                                       '(please commit .hgtags manually)'))

        self.tags() # instantiate the cache
        self._tag(names, node, message, local, user, date)

    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags-related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                self.changelog.rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        alltags = {} # map tag name to (node, hist)
        tagtypes = {}

        tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                r = self.changelog.rev(n)
                l.append((r, t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        marks = []
        for bookmark, n in self._bookmarks.iteritems():
            if n == node:
                marks.append(bookmark)
        return sorted(marks)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        branchmap.updatecache(self)
        return self._branchcaches[self.filtername]

    def branchtip(self, branch):
        '''return the tip node for a given branch'''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            raise error.RepoLookupError(_("unknown branch '%s'") % branch)

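    # Illustrative usage: the branch cache behaves like a mapping from
    # branch name to branch heads, e.g.
    #
    #   heads = repo.branchmap()['default']
    #   tip = repo.branchtip('default')
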
    def lookup(self, key):
        return self[key].node()

    def lookupbranch(self, key, remote=None):
        repo = remote or self
        if key in repo.branchmap():
            return key

        repo = (remote and remote.local()) and remote or self
        return repo[key].branch()

    def known(self, nodes):
        nm = self.changelog.nodemap
        pc = self._phasecache
        result = []
        for n in nodes:
            r = nm.get(n)
            resp = not (r is None or pc.phase(self, r) >= phases.secret)
            result.append(resp)
        return result

    def local(self):
        return self

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.ui.configbool('phases', 'publish', True):
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered('visible').changelog.filteredrevs

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def setparents(self, p1, p2=nullid):
        copies = self.dirstate.setparents(p1, p2)
        pctx = self[p1]
        if copies:
            # Adjust copy records; the dirstate cannot do it, as it
            # requires access to the parents' manifests. Preserve them
            # only for entries added to the first parent.
            for f in copies:
                if f not in pctx and copies[f] in pctx:
                    self.dirstate.copy(copies[f], f)
        if p2 == nullid:
            for f, s in sorted(self.dirstate.copies().items()):
                if f not in pctx and s not in pctx:
                    self.dirstate.copy(None, f)

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return self.wvfs.islink(f)

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

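    # The filter name maps to an hgrc section; a minimal sketch of a
    # matching configuration (the conversion commands are illustrative and
    # assumed to exist on the system):
    #
    #   [encode]
    #   **.txt = tempfile: unix2dos -n INFILE OUTFILE
    #
    #   [decode]
    #   **.txt = dos2unix
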
    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wopener.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener.write(filename, data)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def transaction(self, desc, report=None):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found - run hg recover"))

        def onclose():
            self.store.write(tr)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        rp = report and report or self.ui.warn
        tr = transaction.transaction(rp, self.sopener,
                                     "journal",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     onclose)
        self._transref = weakref.ref(tr)
        return tr

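    # Typical calling pattern (sketch): close() the transaction on success
    # and release() it unconditionally, e.g.
    #
    #   tr = repo.transaction('my-operation')
    #   try:
    #       # ... write to the store ...
    #       tr.close()
    #   finally:
    #       tr.release()
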
    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    def _writejournal(self, desc):
        self.opener.write("journal.dirstate",
                          self.opener.tryread("dirstate"))
        self.opener.write("journal.branch",
                          encoding.fromlocal(self.dirstate.branch()))
        self.opener.write("journal.desc",
                          "%d\n%s\n" % (len(self), desc))
        self.opener.write("journal.bookmarks",
                          self.opener.tryread("bookmarks"))
        self.sopener.write("journal.phaseroots",
                           self.sopener.tryread("phaseroots"))

    def recover(self):
        lock = self.lock()
        try:
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, "journal",
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False, force=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                return self._rollback(dryrun, force)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force):
        ui = self.ui
        try:
            args = self.opener.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise util.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        transaction.rollback(self.sopener, 'undo', ui.warn)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks')
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots')
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            self.vfs.rename('undo.dirstate', 'dirstate')
            try:
                branch = self.opener.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            self.dirstate.invalidate()
            parents = tuple([p.rev() for p in self.parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def invalidatecaches(self):

        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has been.

        This is different from dirstate.invalidate() in that it doesn't
        always reread the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self):
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in self._filecache:
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue

            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        self.store.invalidatecaches()

1035 def invalidateall(self):
1035 def invalidateall(self):
1036 '''Fully invalidates both store and non-store parts, causing the
1036 '''Fully invalidates both store and non-store parts, causing the
1037 subsequent operation to reread any outside changes.'''
1037 subsequent operation to reread any outside changes.'''
1038 # extension should hook this to invalidate its caches
1038 # extension should hook this to invalidate its caches
1039 self.invalidate()
1039 self.invalidate()
1040 self.invalidatedirstate()
1040 self.invalidatedirstate()
1041
1041
    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lockmod.lock(vfs, lockname,
                             int(self.ui.config("ui", "timeout", "600")),
                             releasefn, desc=desc)
            self.ui.warn(_("got lock after %s seconds\n") % l.delay)
        if acquirefn:
            acquirefn()
        return l

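    # Editor's note (sketch, not part of the changeset): the first attempt
    # above uses a timeout of 0, i.e. fail immediately if the lock is held;
    # the retry timeout is user configuration, e.g. in an hgrc:
    #
    #     [ui]
    #     timeout = 600
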
    def _afterlock(self, callback):
        """add a callback to the current repository lock.

        The callback will be executed on lock release."""
        l = self._lockref and self._lockref()
        if l:
            l.postrelease.append(callback)
        else:
            callback()

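    # Editor's note: commit() below uses this helper for its 'commit' hook.
    # When an outer caller still holds the store lock, the hook is deferred
    # to that lock's postrelease list; otherwise it runs immediately.
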
    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            if hasunfilteredcache(self, '_phasecache'):
                self._phasecache.write()
            for k, ce in self._filecache.items():
                if k == 'dirstate' or k not in self.__dict__:
                    continue
                ce.refresh()

        l = self._lock(self.svfs, "lock", wait, unlock,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            self.dirstate.write()
            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

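    # Editor's sketch (not part of the changeset): callers that touch both
    # the working copy and the store conventionally take wlock() before
    # lock(), mirroring commit() below (wlock here, store lock inside
    # commitctx()). A minimal sketch, assuming a `repo` object:
    #
    #     wlock = repo.wlock()
    #     try:
    #         lock = repo.lock()
    #         try:
    #             pass # mutate working copy and store here
    #         finally:
    #             lock.release()
    #     finally:
    #         wlock.release()
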
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

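    # Editor's note (sketch): for a recorded rename, the filelog metadata
    # written by _filecommit() has the shape
    #
    #     meta = {'copy': 'foo', 'copyrev': '<40-digit hex filelog node>'}
    #
    # with fparent1 forced to nullid, which is the "look up the copy data"
    # signal described in the comment above.
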
    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            commitsubs = set()
            newstate = wctx.substate.copy()
            # only manage subrepos and .hgsubstate if .hgsub is present
            if '.hgsub' in wctx:
                # we'll decide whether to track this ourselves, thanks
                for c in changes[:3]:
                    if '.hgsubstate' in c:
                        c.remove('.hgsubstate')

                # compare current state to last committed state
                # build new substate based on last committed state
                oldstate = wctx.p1().substate
                for s in sorted(newstate.keys()):
                    if not match(s):
                        # ignore working copy, use old state if present
                        if s in oldstate:
                            newstate[s] = oldstate[s]
                            continue
                        if not force:
                            raise util.Abort(
                                _("commit with new subrepo %s excluded") % s)
                    if wctx.sub(s).dirty(True):
                        if not self.ui.configbool('ui', 'commitsubrepos'):
                            raise util.Abort(
                                _("uncommitted changes in subrepo %s") % s,
                                hint=_("use --subrepos for recursive commit"))
                        subs.append(s)
                        commitsubs.add(s)
                    else:
                        bs = wctx.sub(s).basestate()
                        newstate[s] = (newstate[s][0], bs, newstate[s][2])
                        if oldstate.get(s, (None, None, None))[1] != bs:
                            subs.append(s)

                # check for removed subrepos
                for p in wctx.parents():
                    r = [s for s in p.substate if s not in newstate]
                    subs += [s for s in r if match(s)]
                if subs:
                    if (not match('.hgsub') and
                        '.hgsub' in (wctx.modified() + wctx.added())):
                        raise util.Abort(
                            _("can't commit subrepos without .hgsub"))
                    changes[0].insert(0, '.hgsubstate')

            elif '.hgsub' in changes[2]:
                # clean up .hgsubstate when .hgsub is removed
                if ('.hgsubstate' in wctx and
                    '.hgsubstate' not in changes[0] + changes[1] + changes[2]):
                    changes[2].insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    f = self.dirstate.normalize(f)
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            cctx = context.workingctx(self, text, user, date, extra, changes)

            if (not force and not extra.get("close") and not merge
                and not cctx.files()
                and wctx.branch() == wctx.p1().branch()):
                return None

            if merge and cctx.deleted():
                raise util.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg help resolve)"))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepo.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
        finally:
            wlock.release()

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            self.hook("commit", node=node, parent1=parent1, parent2=parent2)
        self._afterlock(commithook)
        return ret

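    # Editor's sketch (not part of the changeset): a minimal programmatic
    # commit, assuming a `repo` object. When an editor callable is passed,
    # it receives (repo, ctx, subs) and must return the final message, as
    # the cctx._text assignment above shows.
    #
    #     node = repo.commit(text='example message',
    #                        user='alice <alice@example.com>')
    #     # node is None when there was nothing to commit
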
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = lock = None
        removed = list(ctx.removed())
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.files():
                m1 = p1.manifest().copy()
                m2 = p2.manifest()

                # check in files
                new = {}
                changed = []
                linkrev = len(self)
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        new[f] = self._filecommit(fctx, m1, m2, linkrev, trp,
                                                  changed)
                        m1.set(f, fctx.flags())
                    except OSError, inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError, inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                            raise
                        else:
                            removed.append(f)

                # update manifest
                m1.update(new)
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m1]
                for f in drop:
                    del m1[f]
                mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
                                       p2.manifestnode(), (new, drop))
                files = changed + removed
            else:
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.changelog.delayupdate()
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: self.changelog.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            self.changelog.finalize(trp)
            # set the new commit in its proper phase
            targetphase = subrepo.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets.
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if minimal phase was 0 we don't need to retract anything
                phases.retractboundary(self, targetphase, [n])
            tr.close()
            branchmap.updatecache(self.filtered('served'))
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # update the 'served' branch cache to help read-only server processes
        # Thanks to branchcache collaboration this is done from the nearest
        # filtered subset and it is expected to be fast.
        branchmap.updatecache(self.filtered('served'))

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def walk(self, match, node=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function
        '''
        return self[node].walk(match)

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        """return status of files between two nodes or node and working
        directory.

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def mfmatches(ctx):
            mf = ctx.manifest().copy()
            if match.always():
                return mf
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        ctx1 = self[node1]
        ctx2 = self[node2]

        working = ctx2.rev() is None
        parentworking = working and ctx1 == self['.']
        match = match or matchmod.always(self.root, self.getcwd())
        listignored, listclean, listunknown = ignored, clean, unknown

        # load earliest manifest first for caching reasons
        if not working and ctx2.rev() < ctx1.rev():
            ctx2.manifest()

        if not parentworking:
            def bad(f, msg):
                # 'f' may be a directory pattern from 'match.files()',
                # so 'f not in ctx1' is not enough
                if f not in ctx1 and f not in ctx1.dirs():
                    self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg))
            match.bad = bad

        if working: # we need to scan the working dir
            subrepos = []
            if '.hgsub' in self.dirstate:
                subrepos = sorted(ctx2.substate)
            s = self.dirstate.status(match, subrepos, listignored,
                                     listclean, listunknown)
            cmp, modified, added, removed, deleted, unknown, ignored, clean = s

            # check for any possibly clean files
            if parentworking and cmp:
                fixup = []
                # do a full compare of any files that might have changed
                for f in sorted(cmp):
                    if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
                        or ctx1[f].cmp(ctx2[f])):
                        modified.append(f)
                    else:
                        fixup.append(f)

                # update dirstate for files that are actually clean
                if fixup:
                    if listclean:
                        clean += fixup

                    try:
                        # updating the dirstate is optional
                        # so we don't wait on the lock
                        wlock = self.wlock(False)
                        try:
                            for f in fixup:
                                self.dirstate.normal(f)
                        finally:
                            wlock.release()
                    except error.LockError:
                        pass

        if not parentworking:
            mf1 = mfmatches(ctx1)
            if working:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self['.'])
                for f in cmp + modified + added:
                    mf2[f] = None
                    mf2.set(f, ctx2.flags(f))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
            else:
                # we are comparing two revisions
                deleted, unknown, ignored = [], [], []
                mf2 = mfmatches(ctx2)

            modified, added, clean = [], [], []
            withflags = mf1.withflags() | mf2.withflags()
            for fn, mf2node in mf2.iteritems():
                if fn in mf1:
                    if (fn not in deleted and
                        ((fn in withflags and mf1.flags(fn) != mf2.flags(fn)) or
                         (mf1[fn] != mf2node and
                          (mf2node or ctx1[fn].cmp(ctx2[fn]))))):
                        modified.append(fn)
                    elif listclean:
                        clean.append(fn)
                    del mf1[fn]
                elif fn not in deleted:
                    added.append(fn)
            removed = mf1.keys()

        if working and modified and not self.dirstate._checklink:
            # Symlink placeholders may get non-symlink-like contents
            # via user error or dereferencing by NFS or Samba servers,
            # so we filter out any placeholders that don't look like a
            # symlink
            sane = []
            for f in modified:
                if ctx2.flags(f) == 'l':
                    d = ctx2[f].data()
                    if d == '' or len(d) >= 1024 or '\n' in d or util.binary(d):
                        self.ui.debug('ignoring suspect symlink placeholder'
                                      ' "%s"\n' % f)
                        continue
                sane.append(f)
            modified = sane

        r = modified, added, removed, deleted, unknown, ignored, clean

        if listsubrepos:
            for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
                if working:
                    rev2 = None
                else:
                    rev2 = ctx2.substate[subpath][1]
                try:
                    submatch = matchmod.narrowmatcher(subpath, match)
                    s = sub.status(rev2, match=submatch, ignored=listignored,
                                   clean=listclean, unknown=listunknown,
                                   listsubrepos=True)
                    for rfiles, sfiles in zip(r, s):
                        rfiles.extend("%s/%s" % (subpath, f) for f in sfiles)
                except error.LookupError:
                    self.ui.status(_("skipping missing subrepository: %s\n")
                                   % subpath)

        for l in r:
            l.sort()
        return r

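    # Editor's sketch: status() always returns seven lists in this fixed
    # order, so callers typically unpack it directly:
    #
    #     (modified, added, removed, deleted,
    #      unknown, ignored, clean) = repo.status(ignored=True, clean=True,
    #                                             unknown=True)
    #
    # The ignored/clean/unknown lists stay empty unless requested.
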
    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

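    # Editor's note (sketch): between() samples each top->bottom chain at
    # exponentially growing distances -- it records the nodes 1, 2, 4, 8,
    # ... steps below `top` (appending when i == f, then doubling f). For a
    # pair spanning a chain of ten changesets it would return roughly
    # [n1, n2, n4, n8], which the legacy wire discovery protocol uses to
    # narrow down the common subset with few round trips.
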
    def pull(self, remote, heads=None, force=False):
        return exchange.pull(self, remote, heads, force)

    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override the
        push command.
        """
        pass

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return a util.hooks object consisting of "(repo, remote,
        outgoing)" functions, which are called before pushing changesets.
        """
        return util.hooks()

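    # Editor's sketch (not part of the changeset): an extension could
    # register a pre-push check roughly like this; the source name and the
    # limit are hypothetical.
    #
    #     def checkoutgoing(repo, remote, outgoing):
    #         if len(outgoing.missing) > 100:
    #             raise util.Abort(_('refusing to push 100+ changesets'))
    #     repo.prepushoutgoinghooks.add('myext', checkoutgoing)
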
    def push(self, remote, force=False, revs=None, newbranch=False):
        return exchange.push(self, remote, force, revs, newbranch)

    def stream_in(self, remote, requirements):
        lock = self.lock()
        try:
            # Save remote branchmap. We will use it later
            # to speed up branchcache creation
            rbranchmap = None
            if remote.capable("branchmap"):
                rbranchmap = remote.branchmap()

            fp = remote.stream_out()
            l = fp.readline()
            try:
                resp = int(l)
            except ValueError:
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            if resp == 1:
                raise util.Abort(_('operation forbidden by server'))
            elif resp == 2:
                raise util.Abort(_('locking the remote repository failed'))
            elif resp != 0:
                raise util.Abort(_('the server sent an unknown error code'))
            self.ui.status(_('streaming all changes\n'))
            l = fp.readline()
            try:
                total_files, total_bytes = map(int, l.split(' ', 1))
            except (ValueError, TypeError):
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            self.ui.status(_('%d files to transfer, %s of data\n') %
                           (total_files, util.bytecount(total_bytes)))
            handled_bytes = 0
            self.ui.progress(_('clone'), 0, total=total_bytes)
            start = time.time()

            tr = self.transaction(_('clone'))
            try:
                for i in xrange(total_files):
                    # XXX doesn't support '\n' or '\r' in filenames
                    l = fp.readline()
                    try:
                        name, size = l.split('\0', 1)
                        size = int(size)
                    except (ValueError, TypeError):
                        raise error.ResponseError(
                            _('unexpected response from remote server:'), l)
                    if self.ui.debugflag:
                        self.ui.debug('adding %s (%s)\n' %
                                      (name, util.bytecount(size)))
                    # for backwards compat, name was partially encoded
                    ofp = self.sopener(store.decodedir(name), 'w')
                    for chunk in util.filechunkiter(fp, limit=size):
                        handled_bytes += len(chunk)
                        self.ui.progress(_('clone'), handled_bytes,
                                         total=total_bytes)
                        ofp.write(chunk)
                    ofp.close()
                tr.close()
            finally:
                tr.release()

            # Writing straight to files circumvented the inmemory caches
            self.invalidate()

            elapsed = time.time() - start
            if elapsed <= 0:
                elapsed = 0.001
            self.ui.progress(_('clone'), None)
            self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                           (util.bytecount(total_bytes), elapsed,
                            util.bytecount(total_bytes / elapsed)))

            # new requirements = old non-format requirements +
            #                    new format-related
            # requirements from the streamed-in repository
            requirements.update(set(self.requirements) - self.supportedformats)
            self._applyrequirements(requirements)
            self._writerequirements()

            if rbranchmap:
                rbheads = []
                for bheads in rbranchmap.itervalues():
                    rbheads.extend(bheads)

                if rbheads:
                    rtiprev = max((int(self.changelog.rev(node))
                                   for node in rbheads))
                    cache = branchmap.branchcache(rbranchmap,
                                                  self[rtiprev].node(),
                                                  rtiprev)
                    # Try to stick it as low as possible
                    # filters above served are unlikely to be fetched from
                    # a clone
                    for candidate in ('base', 'immutable', 'served'):
                        rview = self.filtered(candidate)
                        if cache.validfor(rview):
                            self._branchcaches[candidate] = cache
                            cache.write(rview)
                            break
            self.invalidate()
            return len(self.heads()) + 1
        finally:
            lock.release()

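    # Editor's note (sketch): pieced together from the parsing above, the
    # stream_out payload looks like
    #
    #     <resp>\n                            0=ok, 1=forbidden, 2=lock failed
    #     <total_files> <total_bytes>\n
    #     (<name>\0<size>\n<size raw bytes>)*  one entry per store file
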
    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if not stream:
            # if the server explicitly prefers to stream (for fast LANs)
            stream = remote.capable('stream-preferred')

        if stream and not heads:
            # 'stream' means remote revlog format is revlogv1 only
            if remote.capable('stream'):
                return self.stream_in(remote, set(('revlogv1',)))
            # otherwise, 'streamreqs' contains the remote revlog format
            streamreqs = remote.capable('streamreqs')
            if streamreqs:
                streamreqs = set(streamreqs.split(','))
                # if we support it, stream in and adjust our requirements
                if not streamreqs - self.supportedformats:
                    return self.stream_in(remote, streamreqs)
        return self.pull(remote, heads)

    def pushkey(self, namespace, key, old, new):
        self.hook('prepushkey', throw=True, namespace=namespace, key=key,
                  old=old, new=new)
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                  ret=ret)
        return ret

    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

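    # Editor's sketch (not part of the changeset): pushkey namespaces are
    # plain strings such as 'bookmarks' or 'phases'. Moving a bookmark,
    # assuming hex node strings:
    #
    #     ok = repo.pushkey('bookmarks', 'mybook', oldhexnode, newhexnode)
    #     # ok is whatever pushkey.push() returned (truthy on success)
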
    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, three, four, five)

    def savecommitmessage(self, text):
        fp = self.opener('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])

1890 # used to avoid circular references so destructors work
1890 # used to avoid circular references so destructors work
1891 def aftertrans(files):
1891 def aftertrans(files):
1892 renamefiles = [tuple(t) for t in files]
1892 renamefiles = [tuple(t) for t in files]
1893 def a():
1893 def a():
1894 for vfs, src, dest in renamefiles:
1894 for vfs, src, dest in renamefiles:
1895 try:
1895 try:
1896 vfs.rename(src, dest)
1896 vfs.rename(src, dest)
1897 except OSError: # journal file does not yet exist
1897 except OSError: # journal file does not yet exist
1898 pass
1898 pass
1899 return a
1899 return a
1900
1900
1901 def undoname(fn):
1901 def undoname(fn):
1902 base, name = os.path.split(fn)
1902 base, name = os.path.split(fn)
1903 assert name.startswith('journal')
1903 assert name.startswith('journal')
1904 return os.path.join(base, name.replace('journal', 'undo', 1))
1904 return os.path.join(base, name.replace('journal', 'undo', 1))
1905
1905
1906 def instance(ui, path, create):
1906 def instance(ui, path, create):
1907 return localrepository(ui, util.urllocalpath(path), create)
1907 return localrepository(ui, util.urllocalpath(path), create)
1908
1908
1909 def islocal(path):
1909 def islocal(path):
1910 return True
1910 return True
@@ -1,830 +1,837 @@
# wireproto.py - generic wire protocol support functions
#
# Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import urllib, tempfile, os, sys
from i18n import _
from node import bin, hex
import changegroup as changegroupmod, bundle2
import peer, error, encoding, util, store, exchange


class abstractserverproto(object):
    """abstract class that summarizes the protocol API

    Used as reference and documentation.
    """

    def getargs(self, args):
        """return the value for arguments in <args>

        returns a list of values (same order as <args>)"""
        raise NotImplementedError()

    def getfile(self, fp):
        """write the whole content of a file into a file like object

        The file is in the form::

            (<chunk-size>\n<chunk>)+0\n

        chunk size is the ascii version of the int.
        """
        raise NotImplementedError()
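
    # Illustrative sketch (not part of the original source): per the
    # docstring above, the getfile framing is (<chunk-size>\n<chunk>)+0\n,
    # so a payload sent as the two chunks "abc" and "de" arrives as the
    # byte string
    #
    #     '3\nabc2\nde0\n'
    #
    # where each chunk is preceded by its ASCII length plus a newline, and
    # a zero length terminates the sequence.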

    def redirect(self):
        """may setup interception for stdout and stderr

        See also the `restore` method."""
        raise NotImplementedError()

    # If the `redirect` function does install interception, the `restore`
    # function MUST be defined. If interception is not used, this function
    # MUST NOT be defined.
    #
    # left commented here on purpose
    #
    #def restore(self):
    #    """reinstall previous stdout and stderr and return intercepted stdout
    #    """
    #    raise NotImplementedError()

    def groupchunks(self, cg):
        """return 4096-byte chunks from a changegroup object

        Some protocols may have compressed the contents."""
        raise NotImplementedError()

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

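# Illustrative sketch (not part of the original source): a future is a
# write-once cell; reading .value before it is set raises AttributeError,
# and setting it twice raises RepoError:
#
#     f = future()
#     f.set(42)
#     f.value          # -> 42
#     f.set(43)        # -> error.RepoError("future is already set")
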
class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        pass

class localbatch(batcher):
    '''performs the queued calls directly'''
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))

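# Illustrative sketch (not part of the original source): how a caller drives
# a batcher, assuming `remote` is a peer whose batch() returns one of the
# classes above.  Queued method calls return futures; submit() fills in
# their .value attributes:
#
#     b = remote.batch()
#     fheads = b.heads()
#     fknown = b.known(nodes)
#     b.submit()
#     heads, known = fheads.value, fknown.value
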
class remotebatch(batcher):
    '''batches the queued calls; uses as few roundtrips as possible'''
    def __init__(self, remote):
        '''remote must support _submitbatch(encbatch) and
        _submitone(op, encargs)'''
        batcher.__init__(self)
        self.remote = remote
    def submit(self):
        req, rsp = [], []
        for name, args, opts, resref in self.calls:
            mtd = getattr(self.remote, name)
            batchablefn = getattr(mtd, 'batchable', None)
            if batchablefn is not None:
                batchable = batchablefn(mtd.im_self, *args, **opts)
                encargsorres, encresref = batchable.next()
                if encresref:
                    req.append((name, encargsorres,))
                    rsp.append((batchable, encresref, resref,))
                else:
                    resref.set(encargsorres)
            else:
                if req:
                    self._submitreq(req, rsp)
                    req, rsp = [], []
                resref.set(mtd(*args, **opts))
        if req:
            self._submitreq(req, rsp)
    def _submitreq(self, req, rsp):
        encresults = self.remote._submitbatch(req)
        for encres, r in zip(encresults, rsp):
            batchable, encresref, resref = r
            encresref.set(encres)
            resref.set(batchable.next())

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Handle locally computable results first:
        if not one:
            yield "a local result", None
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = batchable.next()
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.func_name, encargsorres))
        return batchable.next()
    setattr(plain, 'batchable', f)
    return plain

# list of nodes encoding / decoding

def decodelist(l, sep=' '):
    if l:
        return map(bin, l.split(sep))
    return []

def encodelist(l, sep=' '):
    return sep.join(map(hex, l))
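
# Illustrative sketch (not part of the original source): encodelist() and
# decodelist() convert between lists of 20-byte binary nodes and their
# separator-joined 40-character hex forms, so for any list of nodes `ns`
#
#     decodelist(encodelist(ns)) == ns
#
# and the empty string decodes to the empty list.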

# batched call argument encoding

def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))
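
# Illustrative sketch (not part of the original source): the escaping above
# quotes the four metacharacters used by the batch command framing, and
# unescapearg() reverses it exactly:
#
#     escapearg('key=val,x;y:z')   ->  'key:=val:,x:;y::z'
#     unescapearg(escapearg(s)) == s    for any string s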

# client side

class wirepeer(peer.peerrepository):

    def batch(self):
        return remotebatch(self)
    def _submitbatch(self, req):
        cmds = []
        for op, argsdict in req:
            args = ','.join('%s=%s' % p for p in argsdict.iteritems())
            cmds.append('%s %s' % (op, args))
        rsp = self._call("batch", cmds=';'.join(cmds))
        return rsp.split(';')
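
    # Illustrative sketch (not part of the original source): two queued
    # calls, heads() and known(nodes='abc'), are framed by _submitbatch as
    #
    #     cmds = 'heads ;known nodes=abc'
    #
    # and the server's reply is split back on ';' into one result per call.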
    def _submitone(self, op, args):
        return self._call(op, **args)

    @batchable
    def lookup(self, key):
        self.requirecap('lookup', _('look up remote revision'))
        f = future()
        yield {'key': encoding.fromlocal(key)}, f
        d = f.value
        success, data = d[:-1].split(" ", 1)
        if int(success):
            yield bin(data)
        self._abort(error.RepoError(data))

    @batchable
    def heads(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            yield decodelist(d[:-1])
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def known(self, nodes):
        f = future()
        yield {'nodes': encodelist(nodes)}, f
        d = f.value
        try:
            yield [bool(int(f)) for f in d]
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def branchmap(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            branchmap = {}
            for branchpart in d.splitlines():
                branchname, branchheads = branchpart.split(' ', 1)
                branchname = encoding.tolocal(urllib.unquote(branchname))
                branchheads = decodelist(branchheads)
                branchmap[branchname] = branchheads
            yield branchmap
        except TypeError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def branches(self, nodes):
        n = encodelist(nodes)
        d = self._call("branches", nodes=n)
        try:
            br = [tuple(decodelist(b)) for b in d.splitlines()]
            return br
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def between(self, pairs):
        batch = 8 # avoid giant requests
        r = []
        for i in xrange(0, len(pairs), batch):
            n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
            d = self._call("between", pairs=n)
            try:
                r.extend(l and decodelist(l) or [] for l in d.splitlines())
            except ValueError:
                self._abort(error.ResponseError(_("unexpected response:"), d))
        return r

    @batchable
    def pushkey(self, namespace, key, old, new):
        if not self.capable('pushkey'):
            yield False, None
        f = future()
        self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
        yield {'namespace': encoding.fromlocal(namespace),
               'key': encoding.fromlocal(key),
               'old': encoding.fromlocal(old),
               'new': encoding.fromlocal(new)}, f
        d = f.value
        d, output = d.split('\n', 1)
        try:
            d = bool(int(d))
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), d)
        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        yield d

    @batchable
    def listkeys(self, namespace):
        if not self.capable('pushkey'):
            yield {}, None
        f = future()
        self.ui.debug('preparing listkeys for "%s"\n' % namespace)
        yield {'namespace': encoding.fromlocal(namespace)}, f
        d = f.value
        r = {}
        for l in d.splitlines():
            k, v = l.split('\t')
            r[encoding.tolocal(k)] = encoding.tolocal(v)
        yield r

    def stream_out(self):
        return self._callstream('stream_out')

    def changegroup(self, nodes, kind):
        n = encodelist(nodes)
        f = self._callcompressable("changegroup", roots=n)
        return changegroupmod.unbundle10(f, 'UN')

    def changegroupsubset(self, bases, heads, kind):
        self.requirecap('changegroupsubset', _('look up remote changes'))
        bases = encodelist(bases)
        heads = encodelist(heads)
        f = self._callcompressable("changegroupsubset",
                                   bases=bases, heads=heads)
        return changegroupmod.unbundle10(f, 'UN')

    def getbundle(self, source, heads=None, common=None, bundlecaps=None,
                  **kwargs):
        self.requirecap('getbundle', _('look up remote changes'))
        opts = {}
        if heads is not None:
            opts['heads'] = encodelist(heads)
        if common is not None:
            opts['common'] = encodelist(common)
        if bundlecaps is not None:
            opts['bundlecaps'] = ','.join(bundlecaps)
        opts.update(kwargs)
        f = self._callcompressable("getbundle", **opts)
        if bundlecaps is not None and 'HG2X' in bundlecaps:
            return bundle2.unbundle20(self.ui, f)
        else:
            return changegroupmod.unbundle10(f, 'UN')

    def unbundle(self, cg, heads, source):
        '''Send cg (a readable file-like object representing the
        changegroup to push, typically a chunkbuffer object) to the
        remote server as a bundle.

        When pushing a bundle10 stream, return an integer indicating the
        result of the push (see localrepository.addchangegroup()).

        When pushing a bundle20 stream, return a bundle20 stream.'''

        if heads != ['force'] and self.capable('unbundlehash'):
            heads = encodelist(['hashed',
                                util.sha1(''.join(sorted(heads))).digest()])
        else:
            heads = encodelist(heads)

        if util.safehasattr(cg, 'deltaheader'):
            # this is a bundle10, do the old style call sequence
            ret, output = self._callpush("unbundle", cg, heads=heads)
            if ret == "":
                raise error.ResponseError(
                    _('push failed:'), output)
            try:
                ret = int(ret)
            except ValueError:
                raise error.ResponseError(
                    _('push failed (unexpected response):'), ret)

            for l in output.splitlines(True):
                self.ui.status(_('remote: '), l)
        else:
            # bundle2 push. Send a stream, fetch a stream.
            stream = self._calltwowaystream('unbundle', cg, heads=heads)
            ret = bundle2.unbundle20(self.ui, stream)
        return ret

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        # don't pass optional arguments left at their default value
        opts = {}
        if three is not None:
            opts['three'] = three
        if four is not None:
            opts['four'] = four
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        with _callstream.

        returns the server reply as a file like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute a <cmd> on server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _calltwowaystream(self, cmd, fp, **args):
        """execute <cmd> on server

        The command will send a stream to the server and get a stream in reply.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """clearly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()

# server side

# wire protocol command can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts
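
# Illustrative sketch (not part of the original source): options() keeps the
# known keys and complains about the rest, e.g.
#
#     options('getbundle', ['heads', 'common'], {'heads': 'abc', 'x': '1'})
#
# returns {'heads': 'abc'} after printing
# "abort: getbundle got unexpected arguments x" to stderr.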

# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes done here will be propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('experimental', 'bundle2-exp', False):
        capsblob = bundle2.encodecaps(repo.bundle2caps)
        caps.append('bundle2-exp=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

# If you are writing an extension and consider wrapping this function,
# wrap `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
    for k, v in opts.iteritems():
        if k in ('heads', 'common'):
            opts[k] = decodelist(v)
        elif k == 'bundlecaps':
            opts[k] = set(v.split(','))
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
                   for k, v in d])
    return t

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))
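
# Illustrative sketch (not part of the original source): the known command
# answers with one character per queried node, '1' for present and '0' for
# missing, so asking about three nodes of which only the second is known
# yields the reply '010'.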

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)
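
# Illustrative sketch (not part of the original source): the pushkey reply
# is the integer result on the first line, followed by any captured output
# when the protocol supports redirection:
#
#     '1\n'                        # success, no captured output
#     '0\nremote hook failed\n'    # failure plus captured output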

def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. Client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))
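
# Illustrative sketch (not part of the original source): following the
# stream() docstring, a successful reply for two files might begin
#
#     0\n
#     2 150\n
#     data/foo.i\x00100\n<100 bytes>data/bar.i\x0050\n<50 bytes>
#
# '0\n' signals success, '2 150' is the file count and total byte count,
# and each file is introduced by its (partially encoded) name, a NUL byte,
# and its size.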

@wireprotocommand('unbundle', 'heads')
def unbundle(repo, proto, heads):
    their_heads = decodelist(heads)

    try:
        proto.redirect()

        exchange.check_heads(repo, their_heads, 'preparing changes')

        # write bundle data to temporary file because it can be big
        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        r = 0
        try:
            proto.getfile(fp)
            fp.seek(0)
            gen = exchange.readbundle(repo.ui, fp, None)
            r = exchange.unbundle(repo, gen, their_heads, 'serve',
                                  proto._client())
            if util.safehasattr(r, 'addpart'):
                # The return looks streamable; we are in the bundle2 case
                # and should return a stream.
                return streamres(r.getchunks())
            return pushres(r)

        finally:
            fp.close()
            os.unlink(tempname)
    except bundle2.UnknownPartError, exc:
        bundler = bundle2.bundle20(repo.ui)
        part = bundle2.bundlepart('B2X:ERROR:UNKNOWNPART',
                                  [('parttype', str(exc))])
        bundler.addpart(part)
        return streamres(bundler.getchunks())
    except util.Abort, inst:
        # The old code we moved used sys.stderr directly.
        # We did not change it to minimise code change.
        # This needs to be moved to something proper.
        # Feel free to do it.
        if getattr(inst, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            manargs = [('message', str(inst))]
            advargs = []
            if inst.hint is not None:
                advargs.append(('hint', inst.hint))
            bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
                                               manargs, advargs))
            return streamres(bundler.getchunks())
        else:
            sys.stderr.write("abort: %s\n" % inst)
            return pushres(0)
    except error.PushRaced, exc:
        if getattr(exc, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            part = bundle2.bundlepart('B2X:ERROR:PUSHRACED',
                                      [('message', str(exc))])
            bundler.addpart(part)
            return streamres(bundler.getchunks())
        else:
            return pusherr(str(exc))
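
# Illustrative note (not part of the original source): with the handler
# above, a racy push over bundle2 no longer breaks the reply stream; the
# client receives a well-formed bundle2 container carrying a single
# B2X:ERROR:PUSHRACED part whose 'message' parameter holds str(exc), while
# bundle1 clients still get the plain pusherr() string.
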
@@ -1,1014 +1,1045 @@

Create an extension to test bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test bundle2 implementation
  >
  > Current bundle2 implementation is far too limited to be used in any core
  > code. We still need to be able to test it while it grow up.
  > """
  >
  > import sys
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > from mercurial import error
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future test say 178 bytes, trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdin"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write('    %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
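  > # Illustrative note (not part of the original test): records added above
  > # are grouped by category, so after a three-verse song one would expect
  > # op.records to hold {'verses': 3} under the 'song' category (assuming
  > # the records object supports lookup by category; the handler itself
  > # does not rely on that here).
  >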
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
  >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
  >         rpart = bundle2.bundlepart('test:pong',
  >                                    [('in-reply-to', str(part.id))])
  >         op.reply.addpart(rpart)
  >
  > @bundle2.parthandler('test:debugreply')
  > def debugreply(op, part):
  >     """print data about the capacity of the bundle reply"""
  >     if op.reply is None:
  >         op.ui.write('debugreply: no reply\n')
  >     else:
  >         op.ui.write('debugreply: capabilities:\n')
  >         for cap in sorted(op.reply.capabilities):
  >             op.ui.write('debugreply: %r\n' % cap)
  >             for val in op.reply.capabilities[cap]:
  >                 op.ui.write('debugreply: %r\n' % val)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts to the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
  >           ('r', 'rev', [], 'includes those changeset in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard ouput"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
  >             raise util.Abort('%s' % exc)
  >
  >     if opts['reply']:
  >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
  >         bundler.addpart(bundle2.bundlepart('b2x:replycaps', data=capsstring))
  >
  >     if opts['pushrace']:
  >         dummynode = '01234567890123456789'
  >         bundler.addpart(bundle2.bundlepart('b2x:check:heads', data=dummynode))
  >
  >     revs = opts['rev']
  >     if 'rev' in opts:
  >         revs = scmutil.revrange(repo, opts['rev'])
  >     if revs:
  >         # very crude version of a changegroup part creation
  >         bundled = repo.revs('%ld::%ld', revs, revs)
  >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
  >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
92 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
92 > outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
93 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
93 > cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
94 > part = bundle2.bundlepart('b2x:changegroup', data=cg.getchunks())
94 > part = bundle2.bundlepart('b2x:changegroup', data=cg.getchunks())
95 > bundler.addpart(part)
95 > bundler.addpart(part)
96 >
96 >
97 > if opts['parts']:
97 > if opts['parts']:
98 > part = bundle2.bundlepart('test:empty')
98 > part = bundle2.bundlepart('test:empty')
99 > bundler.addpart(part)
99 > bundler.addpart(part)
100 > # add a second one to make sure we handle multiple parts
100 > # add a second one to make sure we handle multiple parts
101 > part = bundle2.bundlepart('test:empty')
101 > part = bundle2.bundlepart('test:empty')
102 > bundler.addpart(part)
102 > bundler.addpart(part)
103 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
103 > part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
104 > bundler.addpart(part)
104 > bundler.addpart(part)
105 > part = bundle2.bundlepart('test:debugreply')
105 > part = bundle2.bundlepart('test:debugreply')
106 > bundler.addpart(part)
106 > bundler.addpart(part)
107 > part = bundle2.bundlepart('test:math',
107 > part = bundle2.bundlepart('test:math',
108 > [('pi', '3.14'), ('e', '2.72')],
108 > [('pi', '3.14'), ('e', '2.72')],
109 > [('cooking', 'raw')],
109 > [('cooking', 'raw')],
110 > '42')
110 > '42')
111 > bundler.addpart(part)
111 > bundler.addpart(part)
112 > if opts['unknown']:
112 > if opts['unknown']:
113 > part = bundle2.bundlepart('test:UNKNOWN',
113 > part = bundle2.bundlepart('test:UNKNOWN',
114 > data='some random content')
114 > data='some random content')
115 > bundler.addpart(part)
115 > bundler.addpart(part)
116 > if opts['parts']:
116 > if opts['parts']:
117 > part = bundle2.bundlepart('test:ping')
117 > part = bundle2.bundlepart('test:ping')
118 > bundler.addpart(part)
118 > bundler.addpart(part)
119 >
119 >
120 > if path is None:
120 > if path is None:
121 > file = sys.stdout
121 > file = sys.stdout
122 > else:
122 > else:
123 > file = open(path, 'w')
123 > file = open(path, 'w')
124 >
124 >
125 > for chunk in bundler.getchunks():
125 > for chunk in bundler.getchunks():
126 > file.write(chunk)
126 > file.write(chunk)
127 >
127 >
128 > @command('unbundle2', [], '')
128 > @command('unbundle2', [], '')
129 > def cmdunbundle2(ui, repo, replypath=None):
129 > def cmdunbundle2(ui, repo, replypath=None):
130 > """process a bundle2 stream from stdin on the current repo"""
130 > """process a bundle2 stream from stdin on the current repo"""
131 > try:
131 > try:
132 > tr = None
132 > tr = None
133 > lock = repo.lock()
133 > lock = repo.lock()
134 > tr = repo.transaction('processbundle')
134 > tr = repo.transaction('processbundle')
135 > try:
135 > try:
136 > unbundler = bundle2.unbundle20(ui, sys.stdin)
136 > unbundler = bundle2.unbundle20(ui, sys.stdin)
137 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
137 > op = bundle2.processbundle(repo, unbundler, lambda: tr)
138 > tr.close()
138 > tr.close()
139 > except KeyError, exc:
139 > except KeyError, exc:
140 > raise util.Abort('missing support for %s' % exc)
140 > raise util.Abort('missing support for %s' % exc)
141 > except error.PushRaced, exc:
141 > except error.PushRaced, exc:
142 > raise util.Abort('push race: %s' % exc)
142 > raise util.Abort('push race: %s' % exc)
143 > finally:
143 > finally:
144 > if tr is not None:
144 > if tr is not None:
145 > tr.release()
145 > tr.release()
146 > lock.release()
146 > lock.release()
147 > remains = sys.stdin.read()
147 > remains = sys.stdin.read()
148 > ui.write('%i unread bytes\n' % len(remains))
148 > ui.write('%i unread bytes\n' % len(remains))
149 > if op.records['song']:
149 > if op.records['song']:
150 > totalverses = sum(r['verses'] for r in op.records['song'])
150 > totalverses = sum(r['verses'] for r in op.records['song'])
151 > ui.write('%i total verses sung\n' % totalverses)
151 > ui.write('%i total verses sung\n' % totalverses)
152 > for rec in op.records['changegroup']:
152 > for rec in op.records['changegroup']:
153 > ui.write('addchangegroup return: %i\n' % rec['return'])
153 > ui.write('addchangegroup return: %i\n' % rec['return'])
154 > if op.reply is not None and replypath is not None:
154 > if op.reply is not None and replypath is not None:
155 > file = open(replypath, 'w')
155 > file = open(replypath, 'w')
156 > for chunk in op.reply.getchunks():
156 > for chunk in op.reply.getchunks():
157 > file.write(chunk)
157 > file.write(chunk)
158 >
158 >
159 > @command('statbundle2', [], '')
159 > @command('statbundle2', [], '')
160 > def cmdstatbundle2(ui, repo):
160 > def cmdstatbundle2(ui, repo):
161 > """print statistic on the bundle2 container read from stdin"""
161 > """print statistic on the bundle2 container read from stdin"""
162 > unbundler = bundle2.unbundle20(ui, sys.stdin)
162 > unbundler = bundle2.unbundle20(ui, sys.stdin)
163 > try:
163 > try:
164 > params = unbundler.params
164 > params = unbundler.params
165 > except KeyError, exc:
165 > except KeyError, exc:
166 > raise util.Abort('unknown parameters: %s' % exc)
166 > raise util.Abort('unknown parameters: %s' % exc)
167 > ui.write('options count: %i\n' % len(params))
167 > ui.write('options count: %i\n' % len(params))
168 > for key in sorted(params):
168 > for key in sorted(params):
169 > ui.write('- %s\n' % key)
169 > ui.write('- %s\n' % key)
170 > value = params[key]
170 > value = params[key]
171 > if value is not None:
171 > if value is not None:
172 > ui.write(' %s\n' % value)
172 > ui.write(' %s\n' % value)
173 > count = 0
173 > count = 0
174 > for p in unbundler.iterparts():
174 > for p in unbundler.iterparts():
175 > count += 1
175 > count += 1
176 > ui.write(' :%s:\n' % p.type)
176 > ui.write(' :%s:\n' % p.type)
177 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
177 > ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
178 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
178 > ui.write(' advisory: %i\n' % len(p.advisoryparams))
179 > ui.write(' payload: %i bytes\n' % len(p.read()))
179 > ui.write(' payload: %i bytes\n' % len(p.read()))
180 > ui.write('parts count: %i\n' % count)
180 > ui.write('parts count: %i\n' % count)
181 > EOF
181 > EOF
182 $ cat >> $HGRCPATH << EOF
182 $ cat >> $HGRCPATH << EOF
183 > [extensions]
183 > [extensions]
184 > bundle2=$TESTTMP/bundle2.py
184 > bundle2=$TESTTMP/bundle2.py
185 > [experimental]
185 > [experimental]
186 > bundle2-exp=True
186 > bundle2-exp=True
187 > [ui]
187 > [ui]
188 > ssh=python "$TESTDIR/dummyssh"
188 > ssh=python "$TESTDIR/dummyssh"
189 > [web]
189 > [web]
190 > push_ssl = false
190 > push_ssl = false
191 > allow_push = *
191 > allow_push = *
192 > EOF
192 > EOF
193
193
194 The extension requires a repo (currently unused)
194 The extension requires a repo (currently unused)
195
195
196 $ hg init main
196 $ hg init main
197 $ cd main
197 $ cd main
198 $ touch a
198 $ touch a
199 $ hg add a
199 $ hg add a
200 $ hg commit -m 'a'
200 $ hg commit -m 'a'
201
201
202
202
203 Empty bundle
203 Empty bundle
204 =================
204 =================
205
205
206 - no option
206 - no option
207 - no parts
207 - no parts
208
208
209 Test bundling
209 Test bundling
210
210
211 $ hg bundle2
211 $ hg bundle2
212 HG2X\x00\x00\x00\x00 (no-eol) (esc)
212 HG2X\x00\x00\x00\x00 (no-eol) (esc)
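
An aside on those eight bytes: they decode directly against the layout in the format documentation. A minimal sketch, assuming the stream really is just the magic string, a 16-bit parameter blob size, and a 16-bit zero acting as the end-of-stream marker:

    import struct

    data = 'HG2X\x00\x00\x00\x00'                   # the empty container above
    assert data[:4] == 'HG2X'                       # magic string
    assert struct.unpack('>H', data[4:6]) == (0,)   # no stream parameters
    assert struct.unpack('>H', data[6:8]) == (0,)   # end of stream marker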
213
214 Test unbundling
215
216 $ hg bundle2 | hg statbundle2
217 options count: 0
218 parts count: 0
219
220 Test that old style bundles are detected and refused
221
222 $ hg bundle --all ../bundle.hg
223 1 changesets found
224 $ hg statbundle2 < ../bundle.hg
225 abort: unknown bundle version 10
226 [255]
227
228 Test parameters
229 =================
230
231 - some options
232 - no parts
233
234 advisory parameters, no value
235 -------------------------------
236
237 Simplest possible parameter form
238
239 Test generation of a simple option
240
241 $ hg bundle2 --param 'caution'
242 HG2X\x00\x07caution\x00\x00 (no-eol) (esc)
243
244 Test unbundling
245
246 $ hg bundle2 --param 'caution' | hg statbundle2
247 options count: 1
248 - caution
249 parts count: 0
250
251 Test generation of multiple options
252
253 $ hg bundle2 --param 'caution' --param 'meal'
254 HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)
255
256 Test unbundling
257
258 $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
259 options count: 2
260 - caution
261 - meal
262 parts count: 0
263
264 advisory parameters, with value
265 -------------------------------
266
267 Test generation
268
269 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
270 HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)
271
272 Test unbundling
273
274 $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
275 options count: 3
276 - caution
277 - elephants
278 - meal
279 vegan
280 parts count: 0
281
282 parameter with special char in value
283 ---------------------------------------------------
284
285 Test generation
286
287 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
288 HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
289
290 Test unbundling
291
292 $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
293 options count: 2
294 - e|! 7/
295 babar%#==tutu
296 - simple
297 parts count: 0
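
The escaping shown above is plain urlquoting applied to the name and the value separately, after splitting on the first '='. A sketch of that encoding step, assuming nothing beyond Python 2's urllib.quote:

    import urllib

    def encodeparam(param):
        # quote name and value separately so '=' survives as the delimiter
        if '=' in param:
            name, value = param.split('=', 1)
            return '%s=%s' % (urllib.quote(name), urllib.quote(value))
        return urllib.quote(param)

    # 'e|! 7/=babar%#==tutu' -> 'e%7C%21%207/=babar%25%23%3D%3Dtutu'
    print encodeparam('e|! 7/=babar%#==tutu')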
298
299 Test unknown mandatory option
300 ---------------------------------------------------
301
302 $ hg bundle2 --param 'Gravity' | hg statbundle2
303 abort: unknown parameters: 'Gravity'
304 [255]
305
306 Test debug output
307 ---------------------------------------------------
308
309 bundling debug
310
311 $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
312 start emission of HG2X stream
313 bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
314 start of parts
315 end of bundle
316
317 file content is ok
318
319 $ cat ../out.hg2
320 HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)
321
322 unbundling debug
323
324 $ hg statbundle2 --debug < ../out.hg2
325 start processing of HG2X stream
326 reading bundle2 stream parameters
327 ignoring unknown parameter 'e|! 7/'
328 ignoring unknown parameter 'simple'
329 options count: 2
330 - e|! 7/
331 babar%#==tutu
332 - simple
333 start extraction of bundle2 parts
334 part header size: 0
335 end of bundle2 stream
336 parts count: 0
337
338
339 Test buggy input
340 ---------------------------------------------------
341
342 empty parameter name
343
344 $ hg bundle2 --param '' --quiet
345 abort: empty parameter name
346 [255]
347
348 bad parameter name
349
350 $ hg bundle2 --param 42babar
351 abort: non letter first character: '42babar'
352 [255]
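
Both aborts come from the same validation of the parameter name, which also encodes the advisory/mandatory rule from the format documentation (lower case first letter: ignorable; capital: mandatory). A hypothetical helper showing that rule, not mercurial's actual function:

    def checkparam(name):
        # mirrors the two aborts above, then classifies the parameter
        if not name:
            raise ValueError('empty parameter name')
        if not name[0].isalpha():
            raise ValueError("non letter first character: '%s'" % name)
        return 'advisory' if name[0].islower() else 'mandatory'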
353
354
355 Test part
356 =================
357
358 $ hg bundle2 --parts ../parts.hg2 --debug
359 start emission of HG2X stream
360 bundle parameter:
361 start of parts
362 bundle part: "test:empty"
363 bundle part: "test:empty"
364 bundle part: "test:song"
365 bundle part: "test:debugreply"
366 bundle part: "test:math"
367 bundle part: "test:ping"
368 end of bundle
369
370 $ cat ../parts.hg2
371 HG2X\x00\x00\x00\x11 (esc)
372 test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
373 test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
374 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
375 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
376
377
378 $ hg statbundle2 < ../parts.hg2
379 options count: 0
380 :test:empty:
381 mandatory: 0
382 advisory: 0
383 payload: 0 bytes
384 :test:empty:
385 mandatory: 0
386 advisory: 0
387 payload: 0 bytes
388 :test:song:
389 mandatory: 0
390 advisory: 0
391 payload: 178 bytes
392 :test:debugreply:
393 mandatory: 0
394 advisory: 0
395 payload: 0 bytes
396 :test:math:
397 mandatory: 2
398 advisory: 1
399 payload: 2 bytes
400 :test:ping:
401 mandatory: 0
402 advisory: 0
403 payload: 0 bytes
404 parts count: 6
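
The dump and the statistics line up with a small, fixed wire layout. The stand-alone reader below is inferred from the dumps and the debug output above rather than lifted from mercurial, so treat the exact field widths as assumptions: a 16-bit part header size (zero ends the stream), a one-byte type length, the type, a 32-bit part id, one-byte counts of mandatory and advisory parameters, one-byte key and value sizes, the keys and values, then 32-bit length-prefixed payload chunks closed by an empty chunk.

    import struct

    def readparts(fp):
        assert fp.read(4) == 'HG2X'                    # magic string
        paramssize = struct.unpack('>H', fp.read(2))[0]
        fp.read(paramssize)                            # skip stream parameters
        while True:
            headersize = struct.unpack('>H', fp.read(2))[0]
            if not headersize:
                return                                 # end of stream marker
            header = fp.read(headersize)
            typelen = ord(header[0])
            parttype = header[1:1 + typelen]
            offset = 1 + typelen
            partid = struct.unpack('>I', header[offset:offset + 4])[0]
            offset += 4
            mancount, advcount = ord(header[offset]), ord(header[offset + 1])
            offset += 2
            sizes = [ord(c) for c in
                     header[offset:offset + 2 * (mancount + advcount)]]
            offset += len(sizes)
            fields = []
            for size in sizes:
                fields.append(header[offset:offset + size])
                offset += size
            params = zip(fields[::2], fields[1::2])    # (key, value) pairs
            chunks = []
            while True:
                chunksize = struct.unpack('>I', fp.read(4))[0]
                if not chunksize:
                    break                              # empty chunk ends payload
                chunks.append(fp.read(chunksize))
            yield parttype, partid, params, ''.join(chunks)

    for parttype, partid, params, payload in readparts(open('../parts.hg2', 'rb')):
        print '%s (id %i): %i params, %i byte payload' % (
            parttype, partid, len(params), len(payload))

Run against parts.hg2, it should report the same six parts, parameter counts, and payload sizes as statbundle2 does above.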
405
406 $ hg statbundle2 --debug < ../parts.hg2
407 start processing of HG2X stream
408 reading bundle2 stream parameters
409 options count: 0
410 start extraction of bundle2 parts
411 part header size: 17
412 part type: "test:empty"
413 part id: "0"
414 part parameters: 0
415 :test:empty:
416 mandatory: 0
417 advisory: 0
418 payload chunk size: 0
419 payload: 0 bytes
420 part header size: 17
421 part type: "test:empty"
422 part id: "1"
423 part parameters: 0
424 :test:empty:
425 mandatory: 0
426 advisory: 0
427 payload chunk size: 0
428 payload: 0 bytes
429 part header size: 16
430 part type: "test:song"
431 part id: "2"
432 part parameters: 0
433 :test:song:
434 mandatory: 0
435 advisory: 0
436 payload chunk size: 178
437 payload chunk size: 0
438 payload: 178 bytes
439 part header size: 22
440 part type: "test:debugreply"
441 part id: "3"
442 part parameters: 0
443 :test:debugreply:
444 mandatory: 0
445 advisory: 0
446 payload chunk size: 0
447 payload: 0 bytes
448 part header size: 43
449 part type: "test:math"
450 part id: "4"
451 part parameters: 3
452 :test:math:
453 mandatory: 2
454 advisory: 1
455 payload chunk size: 2
456 payload chunk size: 0
457 payload: 2 bytes
458 part header size: 16
459 part type: "test:ping"
460 part id: "5"
461 part parameters: 0
462 :test:ping:
463 mandatory: 0
464 advisory: 0
465 payload chunk size: 0
466 payload: 0 bytes
467 part header size: 0
468 end of bundle2 stream
469 parts count: 6
470
471 Test actual unbundling of test part
472 =======================================
473
474 Process the bundle
475
476 $ hg unbundle2 --debug < ../parts.hg2
477 start processing of HG2X stream
478 reading bundle2 stream parameters
479 start extraction of bundle2 parts
480 part header size: 17
481 part type: "test:empty"
482 part id: "0"
483 part parameters: 0
484 ignoring unknown advisory part 'test:empty'
485 payload chunk size: 0
486 part header size: 17
487 part type: "test:empty"
488 part id: "1"
489 part parameters: 0
490 ignoring unknown advisory part 'test:empty'
491 payload chunk size: 0
492 part header size: 16
493 part type: "test:song"
494 part id: "2"
495 part parameters: 0
496 found a handler for part 'test:song'
497 The choir starts singing:
498 payload chunk size: 178
499 payload chunk size: 0
500 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
501 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
502 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
503 part header size: 22
504 part type: "test:debugreply"
505 part id: "3"
506 part parameters: 0
507 found a handler for part 'test:debugreply'
508 debugreply: no reply
509 payload chunk size: 0
510 part header size: 43
511 part type: "test:math"
512 part id: "4"
513 part parameters: 3
514 ignoring unknown advisory part 'test:math'
515 payload chunk size: 2
516 payload chunk size: 0
517 part header size: 16
518 part type: "test:ping"
519 part id: "5"
520 part parameters: 0
521 found a handler for part 'test:ping'
522 received ping request (id 5)
523 payload chunk size: 0
524 part header size: 0
525 end of bundle2 stream
526 0 unread bytes
527 3 total verses sung
528
529 Unbundle with an unknown mandatory part
530 (should abort)
531
532 $ hg bundle2 --parts --unknown ../unknown.hg2
533
534 $ hg unbundle2 < ../unknown.hg2
535 The choir starts singing:
536 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
537 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
538 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
539 debugreply: no reply
540 0 unread bytes
541 abort: missing support for 'test:unknown'
542 [255]
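
Note the case change between the part that was sent ('test:UNKNOWN') and the abort message ('test:unknown'): a part type containing any capital letter is mandatory, so a missing handler becomes a hard error instead of the "ignoring unknown advisory part" skip seen earlier. A sketch of that lookup rule; the registry dict here is a stand-in, not mercurial's literal code:

    parthandlermapping = {}   # stand-in for the real handler registry

    def gethandler(parttype):
        handler = parthandlermapping.get(parttype.lower())
        if handler is None and parttype != parttype.lower():
            # mandatory part (has capital letters) without support
            raise KeyError(parttype.lower())
        return handler        # None: silently skip the advisory part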
543
544 unbundle with a reply
545
546 $ hg bundle2 --parts --reply ../parts-reply.hg2
547 $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
548 0 unread bytes
549 3 total verses sung
550
551 The reply is a bundle
552
553 $ cat ../reply.hg2
554 HG2X\x00\x00\x00\x1f (esc)
555 b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
556 Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
557 Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
558 Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
559 \x00\x00\x00\x00\x00\x1f (esc)
560 b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
561 debugreply: 'city=!'
562 debugreply: 'celeste,ville'
563 debugreply: 'elephants'
564 debugreply: 'babar'
565 debugreply: 'celeste'
566 debugreply: 'ping-pong'
567 \x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to6\x00\x00\x00\x00\x00\x1f (esc)
568 b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to6\x00\x00\x00=received ping request (id 6) (esc)
569 replying to ping request (id 6)
570 \x00\x00\x00\x00\x00\x00 (no-eol) (esc)
571
572 The reply is valid
573
574 $ hg statbundle2 < ../reply.hg2
575 options count: 0
576 :b2x:output:
577 mandatory: 0
578 advisory: 1
579 payload: 217 bytes
580 :b2x:output:
581 mandatory: 0
582 advisory: 1
583 payload: 201 bytes
584 :test:pong:
585 mandatory: 1
586 advisory: 0
587 payload: 0 bytes
588 :b2x:output:
589 mandatory: 0
590 advisory: 1
591 payload: 61 bytes
592 parts count: 4
593
594 Unbundle the reply to get the output:
595
596 $ hg unbundle2 < ../reply.hg2
597 remote: The choir starts singing:
598 remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
599 remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
600 remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
601 remote: debugreply: capabilities:
602 remote: debugreply: 'city=!'
603 remote: debugreply: 'celeste,ville'
604 remote: debugreply: 'elephants'
605 remote: debugreply: 'babar'
606 remote: debugreply: 'celeste'
607 remote: debugreply: 'ping-pong'
608 remote: received ping request (id 6)
609 remote: replying to ping request (id 6)
610 0 unread bytes
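
The "remote:" lines are the b2x:output parts being replayed: each one carries, as payload, whatever the server wrote while handling the part named by its in-reply-to parameter. Roughly what a handler for it could look like (a sketch of the idea, not mercurial's exact implementation):

    from mercurial import bundle2

    @bundle2.parthandler('b2x:output')
    def outputhandler(op, part):
        # echo server-side output back to the local user
        for line in part.read().splitlines():
            op.ui.write('remote: %s\n' % line)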
611
612 Test push race detection
613
614 $ hg bundle2 --pushrace ../part-race.hg2
615
616 $ hg unbundle2 < ../part-race.hg2
617 0 unread bytes
618 abort: push race: repository changed while pushing - please try again
619 [255]
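
The abort is driven by the b2x:check:heads part that --pushrace added: its payload is a concatenation of 20-byte binary nodes recording the heads the push was computed against, and the receiving side raises PushRaced when they no longer match the repository. A sketch of that check, an assumed shape rather than mercurial's literal handler:

    from mercurial import bundle2, error

    @bundle2.parthandler('b2x:check:heads')
    def checkheadshandler(op, part):
        data = part.read()
        expected = [data[i:i + 20] for i in range(0, len(data), 20)]
        if sorted(expected) != sorted(op.repo.heads()):
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')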
620
621 Support for changegroup
622 ===================================
623
624 $ hg unbundle $TESTDIR/bundles/rebase.hg
625 adding changesets
626 adding manifests
627 adding file changes
628 added 8 changesets with 7 changes to 7 files (+3 heads)
629 (run 'hg heads' to see heads, 'hg merge' to merge)
630
631 $ hg log -G
632 o changeset: 8:02de42196ebe
633 | tag: tip
634 | parent: 6:24b6387c8c8c
635 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
636 | date: Sat Apr 30 15:24:48 2011 +0200
637 | summary: H
638 |
639 | o changeset: 7:eea13746799a
640 |/| parent: 6:24b6387c8c8c
641 | | parent: 5:9520eea781bc
642 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
643 | | date: Sat Apr 30 15:24:48 2011 +0200
644 | | summary: G
645 | |
646 o | changeset: 6:24b6387c8c8c
647 | | parent: 1:cd010b8cd998
648 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
649 | | date: Sat Apr 30 15:24:48 2011 +0200
650 | | summary: F
651 | |
652 | o changeset: 5:9520eea781bc
653 |/ parent: 1:cd010b8cd998
654 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
655 | date: Sat Apr 30 15:24:48 2011 +0200
656 | summary: E
657 |
658 | o changeset: 4:32af7686d403
659 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
660 | | date: Sat Apr 30 15:24:48 2011 +0200
661 | | summary: D
662 | |
663 | o changeset: 3:5fddd98957c8
664 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
665 | | date: Sat Apr 30 15:24:48 2011 +0200
666 | | summary: C
667 | |
668 | o changeset: 2:42ccdea3bb16
669 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
670 | date: Sat Apr 30 15:24:48 2011 +0200
671 | summary: B
672 |
673 o changeset: 1:cd010b8cd998
674 parent: -1:000000000000
675 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
676 date: Sat Apr 30 15:24:48 2011 +0200
677 summary: A
678
679 @ changeset: 0:3903775176ed
680 user: test
681 date: Thu Jan 01 00:00:00 1970 +0000
682 summary: a
683
684
685 $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
686 4 changesets found
687 list of changesets:
688 32af7686d403cf45b5d95f2d70cebea587ac806a
689 9520eea781bcca16c1e15acc0ba14335a0e8e5ba
690 eea13746799a9e0bfd88f29d3c2e9dc9389f524f
691 02de42196ebee42ef284b6780a87cdc96e8eaab6
692 start emission of HG2X stream
693 bundle parameter:
694 start of parts
695 bundle part: "b2x:changegroup"
696 bundling: 1/4 changesets (25.00%)
697 bundling: 2/4 changesets (50.00%)
698 bundling: 3/4 changesets (75.00%)
699 bundling: 4/4 changesets (100.00%)
700 bundling: 1/4 manifests (25.00%)
701 bundling: 2/4 manifests (50.00%)
702 bundling: 3/4 manifests (75.00%)
703 bundling: 4/4 manifests (100.00%)
704 bundling: D 1/3 files (33.33%)
705 bundling: E 2/3 files (66.67%)
706 bundling: H 3/3 files (100.00%)
707 end of bundle
708
709 $ cat ../rev.hg2
710 HG2X\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
711 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
712 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
713 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
714 \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
715 \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
716 \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
717 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
718 \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
719 \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
720 \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
721 \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
722 \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
723 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
724 \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
725 \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
726 \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
727 l\r (no-eol) (esc)
728 \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
729 \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
730 \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
731 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
732
733 $ hg unbundle2 < ../rev.hg2
734 adding changesets
735 adding manifests
736 adding file changes
737 added 0 changesets with 0 changes to 3 files
738 0 unread bytes
739 addchangegroup return: 1
740
741 with reply
742
743 $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
744 $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
745 0 unread bytes
746 addchangegroup return: 1
747
748 $ cat ../rev-reply.hg2
749 HG2X\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1f (esc)
750 b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
751 adding manifests
752 adding file changes
753 added 0 changesets with 0 changes to 3 files
754 \x00\x00\x00\x00\x00\x00 (no-eol) (esc)
755
756 Real world exchange
757 =====================
758
759
760 clone --pull
761
762 $ cd ..
763 $ hg clone main other --pull --rev 9520eea781bc
764 adding changesets
765 adding manifests
766 adding file changes
767 added 2 changesets with 2 changes to 2 files
768 updating to branch default
769 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
770 $ hg -R other log -G
771 @ changeset: 1:9520eea781bc
772 | tag: tip
773 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
774 | date: Sat Apr 30 15:24:48 2011 +0200
775 | summary: E
776 |
777 o changeset: 0:cd010b8cd998
778 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
779 date: Sat Apr 30 15:24:48 2011 +0200
780 summary: A
781
782
783 pull
784
785 $ hg -R other pull -r 24b6387c8c8c
786 pulling from $TESTTMP/main (glob)
787 searching for changes
788 adding changesets
789 adding manifests
790 adding file changes
791 added 1 changesets with 1 changes to 1 files (+1 heads)
792 (run 'hg heads' to see heads, 'hg merge' to merge)
793
794 push
795
796 $ hg -R main push other --rev eea13746799a
797 pushing to other
798 searching for changes
799 remote: adding changesets
800 remote: adding manifests
801 remote: adding file changes
802 remote: added 1 changesets with 0 changes to 0 files (-1 heads)
803
804 pull over ssh
805
806 $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback
807 pulling from ssh://user@dummy/main
808 searching for changes
809 adding changesets
810 adding manifests
811 adding file changes
812 added 1 changesets with 1 changes to 1 files (+1 heads)
813 (run 'hg heads' to see heads, 'hg merge' to merge)
814
815 pull over http
816
817 $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
818 $ cat main.pid >> $DAEMON_PIDS
819
820 $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16
821 pulling from http://localhost:$HGPORT/
822 searching for changes
823 adding changesets
824 adding manifests
825 adding file changes
826 added 1 changesets with 1 changes to 1 files (+1 heads)
827 (run 'hg heads .' to see heads, 'hg merge' to merge)
828 $ cat main-error.log
829
830 push over ssh
831
832 $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8
833 pushing to ssh://user@dummy/other
834 searching for changes
835 remote: adding changesets
836 remote: adding manifests
837 remote: adding file changes
838 remote: added 1 changesets with 1 changes to 1 files
839
840 push over http
841
842 $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
843 $ cat other.pid >> $DAEMON_PIDS
844
845 $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403
846 pushing to http://localhost:$HGPORT2/
847 searching for changes
848 remote: adding changesets
849 remote: adding manifests
850 remote: adding file changes
851 remote: added 1 changesets with 1 changes to 1 files
852 $ cat other-error.log
853
853
854 Check final content.
854 Check final content.
855
855
856 $ hg -R other log -G
856 $ hg -R other log -G
857 o changeset: 7:32af7686d403
857 o changeset: 7:32af7686d403
858 | tag: tip
858 | tag: tip
859 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
859 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
860 | date: Sat Apr 30 15:24:48 2011 +0200
860 | date: Sat Apr 30 15:24:48 2011 +0200
861 | summary: D
861 | summary: D
862 |
862 |
863 o changeset: 6:5fddd98957c8
863 o changeset: 6:5fddd98957c8
864 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
864 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
865 | date: Sat Apr 30 15:24:48 2011 +0200
865 | date: Sat Apr 30 15:24:48 2011 +0200
866 | summary: C
866 | summary: C
867 |
867 |
868 o changeset: 5:42ccdea3bb16
868 o changeset: 5:42ccdea3bb16
869 | parent: 0:cd010b8cd998
869 | parent: 0:cd010b8cd998
870 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
870 | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
871 | date: Sat Apr 30 15:24:48 2011 +0200
871 | date: Sat Apr 30 15:24:48 2011 +0200
872 | summary: B
872 | summary: B
873 |
873 |
874 | o changeset: 4:02de42196ebe
874 | o changeset: 4:02de42196ebe
875 | | parent: 2:24b6387c8c8c
875 | | parent: 2:24b6387c8c8c
876 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
876 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
877 | | date: Sat Apr 30 15:24:48 2011 +0200
877 | | date: Sat Apr 30 15:24:48 2011 +0200
878 | | summary: H
878 | | summary: H
879 | |
879 | |
880 | | o changeset: 3:eea13746799a
880 | | o changeset: 3:eea13746799a
881 | |/| parent: 2:24b6387c8c8c
881 | |/| parent: 2:24b6387c8c8c
882 | | | parent: 1:9520eea781bc
882 | | | parent: 1:9520eea781bc
883 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
883 | | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
884 | | | date: Sat Apr 30 15:24:48 2011 +0200
884 | | | date: Sat Apr 30 15:24:48 2011 +0200
885 | | | summary: G
885 | | | summary: G
886 | | |
886 | | |
887 | o | changeset: 2:24b6387c8c8c
887 | o | changeset: 2:24b6387c8c8c
888 |/ / parent: 0:cd010b8cd998
888 |/ / parent: 0:cd010b8cd998
889 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
889 | | user: Nicolas Dumazet <nicdumz.commits@gmail.com>
890 | | date: Sat Apr 30 15:24:48 2011 +0200
890 | | date: Sat Apr 30 15:24:48 2011 +0200
891 | | summary: F
891 | | summary: F
892 | |
892 | |
893 | @ changeset: 1:9520eea781bc
893 | @ changeset: 1:9520eea781bc
894 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
894 |/ user: Nicolas Dumazet <nicdumz.commits@gmail.com>
895 | date: Sat Apr 30 15:24:48 2011 +0200
895 | date: Sat Apr 30 15:24:48 2011 +0200
896 | summary: E
896 | summary: E
897 |
897 |
898 o changeset: 0:cd010b8cd998
898 o changeset: 0:cd010b8cd998
899 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
899 user: Nicolas Dumazet <nicdumz.commits@gmail.com>
900 date: Sat Apr 30 15:24:48 2011 +0200
900 date: Sat Apr 30 15:24:48 2011 +0200
901 summary: A
901 summary: A
902
902
903
903
904 Error Handling
904 Error Handling
905 ==============
905 ==============
906
906
907 Check that errors are properly returned to the client during push.
907 Check that errors are properly returned to the client during push.
908
908
909 Setting up
909 Setting up
910
910
  $ cat > failpush.py << EOF
  > """A small extension that makes push fail when using bundle2.
  >
  > Used to test error handling in bundle2.
  > """
  >
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import exchange
  > from mercurial import extensions
  >
  > def _pushbundle2failpart(orig, pushop, bundler):
  >     extradata = orig(pushop, bundler)
  >     reason = pushop.ui.config('failpush', 'reason', None)
  >     part = None
  >     if reason == 'abort':
  >         part = bundle2.bundlepart('test:abort')
  >     if reason == 'unknown':
  >         part = bundle2.bundlepart('TEST:UNKNOWN')
  >     if reason == 'race':
  >         # 20 bytes of garbage that can never match real heads
  >         part = bundle2.bundlepart('b2x:check:heads', data='01234567890123456789')
  >     if part is not None:
  >         bundler.addpart(part)
  >     return extradata
  >
  > @bundle2.parthandler("test:abort")
  > def handleabort(op, part):
  >     raise util.Abort('Abandon ship!', hint="don't panic")
  >
  > def uisetup(ui):
  >     extensions.wrapfunction(exchange, '_pushbundle2extraparts', _pushbundle2failpart)
  >
  > EOF
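
The extension above relies on `extensions.wrapfunction`, which hands the original function to the wrapper as its first argument; that is why `_pushbundle2failpart` calls `orig(pushop, bundler)` first and only then injects its failing part. A minimal standalone sketch of the same pattern (`target` and `compute` are hypothetical stand-ins, not Mercurial APIs):

    # Sketch of the wrapfunction pattern used by failpush.py; the wrapped
    # module and attribute here are hypothetical placeholders.
    def wrapper(orig, *args, **kwargs):
        result = orig(*args, **kwargs)  # run the original behaviour first
        # ...then bolt extra behaviour on, as failpush does with addpart()
        return result

    # typically registered from an extension's uisetup():
    # extensions.wrapfunction(target, 'compute', wrapper)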

  $ cd main
  $ hg up tip
  3 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo 'I' > I
  $ hg add I
  $ hg ci -m 'I'
  $ hg id
  e7ec4e813ba6 tip
  $ cd ..

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > failpush=$TESTTMP/failpush.py
  > EOF

  $ "$TESTDIR/killdaemons.py" $DAEMON_PIDS
  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

Doing the actual push: Abort error

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = abort
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: Abandon ship!
  (don't panic)
  [255]

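Note that the `util.Abort` raised by the server-side `test:abort` handler reaches the client intact over all three transports: the message and the hint are rendered identically whether the push went through a local repository, ssh, or http.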

Doing the actual push: unknown mandatory parts

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = unknown
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: missing support for 'test:unknown'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: missing support for "'test:unknown'"
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: missing support for "'test:unknown'"
  [255]
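
The part was emitted as 'TEST:UNKNOWN' but is reported back lowercased: part type names appear to use letter case to flag importance, so an unknown all-lowercase part may be skipped as advisory, while an unknown name containing capitals is mandatory and has to abort the exchange. A rough sketch of that dispatch decision (illustrative only, not the actual bundle2 code):

    from mercurial import util

    # Illustrative dispatch logic for incoming parts; handler lookup is
    # case-insensitive, but the case of the name decides whether an
    # unknown part is fatal.
    def dispatchpart(handlers, parttype, part):
        handler = handlers.get(parttype.lower())
        if handler is None:
            if parttype != parttype.lower():
                # capitals mark the part as mandatory: refusing is the
                # only safe answer to something we do not understand
                raise util.Abort("missing support for '%s'" % parttype.lower())
            return  # advisory part: safe to ignore
        handler(part)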

Doing the actual push: race

  $ cat << EOF >> $HGRCPATH
  > [failpush]
  > reason = race
  > EOF

  $ hg -R main push other -r e7ec4e813ba6
  pushing to other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push ssh://user@dummy/other -r e7ec4e813ba6
  pushing to ssh://user@dummy/other
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]

  $ hg -R main push http://localhost:$HGPORT2/ -r e7ec4e813ba6
  pushing to http://localhost:$HGPORT2/
  searching for changes
  abort: push failed:
  'repository changed while pushing - please try again'
  [255]
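
The race simulation works because a `b2x:check:heads` part is meant to carry the binary nodes of the heads the client saw, for the server to compare against its own heads before applying the bundle; 20 bytes of garbage can never match a real head, so the server concludes the repository moved underneath the push. A sketch of what such a server-side handler might look like (a reconstruction under those assumptions, not the verbatim bundle2 implementation):

    from mercurial import bundle2, error

    @bundle2.parthandler('b2x:check:heads')
    def checkheads(op, inpart):
        # the payload is a concatenation of raw 20-byte binary nodes
        heads = []
        h = inpart.read(20)
        while len(h) == 20:
            heads.append(h)
            h = inpart.read(20)
        if sorted(heads) != sorted(op.repo.heads()):
            # the repository changed since the client computed its push:
            # refuse the bundle so the client can pull and try again
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

The PushRaced error is what the client renders above as "abort: push failed: 'repository changed while pushing - please try again'".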